I0216 10:47:15.099724 9 e2e.go:224] Starting e2e run "b16357a3-50a9-11ea-aa00-0242ac110008" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581850034 - Will randomize all specs
Will run 201 of 2164 specs

Feb 16 10:47:15.367: INFO: >>> kubeConfig: /root/.kube/config
Feb 16 10:47:15.370: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 16 10:47:15.396: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 16 10:47:15.452: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 16 10:47:15.452: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 16 10:47:15.452: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 16 10:47:15.466: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 16 10:47:15.466: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 16 10:47:15.466: INFO: e2e test version: v1.13.12
Feb 16 10:47:15.468: INFO: kube-apiserver version: v1.13.8
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:47:15.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 16 10:47:15.644: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb 16 10:47:15.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:18.188: INFO: stderr: ""
Feb 16 10:47:18.188: INFO: stdout: "pod/pause created\n"
Feb 16 10:47:18.188: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 16 10:47:18.188: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tr2s2" to be "running and ready"
Feb 16 10:47:18.219: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 30.170597ms
Feb 16 10:47:20.447: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25886706s
Feb 16 10:47:22.461: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272441677s
Feb 16 10:47:24.501: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.313127023s
Feb 16 10:47:26.525: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336258711s
Feb 16 10:47:28.568: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.379411031s
Feb 16 10:47:28.568: INFO: Pod "pause" satisfied condition "running and ready"
Feb 16 10:47:28.568: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 16 10:47:28.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:28.907: INFO: stderr: ""
Feb 16 10:47:28.907: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 16 10:47:28.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:29.063: INFO: stderr: ""
Feb 16 10:47:29.063: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 16 10:47:29.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:29.227: INFO: stderr: ""
Feb 16 10:47:29.227: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 16 10:47:29.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:29.354: INFO: stderr: ""
Feb 16 10:47:29.354: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb 16 10:47:29.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:29.529: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:47:29.529: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 16 10:47:29.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tr2s2'
Feb 16 10:47:29.747: INFO: stderr: "No resources found.\n"
Feb 16 10:47:29.747: INFO: stdout: ""
Feb 16 10:47:29.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tr2s2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 10:47:29.870: INFO: stderr: ""
Feb 16 10:47:29.870: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:47:29.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tr2s2" for this suite.
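The label state the test verifies above can also be expressed declaratively. The sketch below is a hypothetical manifest for the "pause" pod with the label applied; the pod name, label key, and value come from the log, while the image and container name are assumptions, since the log never shows the pod spec the test created:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    # Added by `kubectl label pods pause testing-label=testing-label-value`
    # and removed again with the trailing-dash form `testing-label-`.
    testing-label: testing-label-value
spec:
  containers:
  - name: pause            # assumed container name
    image: k8s.gcr.io/pause:3.1   # assumed image; not shown in the log
```

The `-L testing-label` flag used in the verification steps adds a TESTING-LABEL column to `kubectl get pod` output, which is exactly what the test asserts on: the value appears after labeling and the column is empty after removal.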
Feb 16 10:47:36.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:47:37.164: INFO: namespace: e2e-tests-kubectl-tr2s2, resource: bindings, ignored listing per whitelist
Feb 16 10:47:37.261: INFO: namespace e2e-tests-kubectl-tr2s2 deletion completed in 7.376042163s

• [SLOW TEST:21.792 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:47:37.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb 16 10:47:37.503: INFO: Waiting up to 5m0s for pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008" in namespace "e2e-tests-containers-6slkz" to be "success or failure"
Feb 16 10:47:37.567: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 64.617398ms
Feb 16 10:47:39.597: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094558442s
Feb 16 10:47:41.618: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115629222s
Feb 16 10:47:43.777: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274070551s
Feb 16 10:47:45.784: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.281871056s
Feb 16 10:47:47.807: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.304550646s
Feb 16 10:47:49.828: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.325576129s
STEP: Saw pod success
Feb 16 10:47:49.828: INFO: Pod "client-containers-bf267800-50a9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 10:47:49.839: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-bf267800-50a9-11ea-aa00-0242ac110008 container test-container:
STEP: delete the pod
Feb 16 10:47:49.931: INFO: Waiting for pod client-containers-bf267800-50a9-11ea-aa00-0242ac110008 to disappear
Feb 16 10:47:49.952: INFO: Pod client-containers-bf267800-50a9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:47:49.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-6slkz" for this suite.
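The "override the image's default command" behavior exercised above maps to the pod spec's `command` field, which replaces the image's ENTRYPOINT (whereas `args` would replace CMD). A minimal sketch under assumed names; the actual test pod spec is not shown in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed image
    # `command` overrides the image's ENTRYPOINT entirely; the container
    # runs this instead of whatever the Dockerfile specified.
    command: ["/bin/echo", "override", "worked"]
```

A pod like this runs to completion and its log output (here, the echoed string) is what a "success or failure" style test would read back, much as the log above shows the framework fetching logs from the test-container.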
Feb 16 10:47:56.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:47:56.347: INFO: namespace: e2e-tests-containers-6slkz, resource: bindings, ignored listing per whitelist
Feb 16 10:47:56.353: INFO: namespace e2e-tests-containers-6slkz deletion completed in 6.386418715s

• [SLOW TEST:19.092 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:47:56.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 10:47:56.566: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ca77f356-50a9-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000ae6932), BlockOwnerDeletion:(*bool)(0xc000ae6933)}}
Feb 16 10:47:56.584: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ca72bd7c-50a9-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000ae6ad2), BlockOwnerDeletion:(*bool)(0xc000ae6ad3)}}
Feb 16 10:47:56.606: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ca74cc37-50a9-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000ea4b7a), BlockOwnerDeletion:(*bool)(0xc000ea4b7b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:48:01.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5l5hh" for this suite.
Feb 16 10:48:09.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:48:09.762: INFO: namespace: e2e-tests-gc-5l5hh, resource: bindings, ignored listing per whitelist
Feb 16 10:48:09.866: INFO: namespace e2e-tests-gc-5l5hh deletion completed in 8.220784136s

• [SLOW TEST:13.513 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:48:09.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 10:48:10.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-s7qn8'
Feb 16 10:48:10.281: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 10:48:10.281: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 16 10:48:10.323: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 16 10:48:10.409: INFO: scanned /root for discovery docs:
Feb 16 10:48:10.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-s7qn8'
Feb 16 10:48:36.871: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 16 10:48:36.871: INFO: stdout: "Created e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310\nScaling up e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 16 10:48:36.871: INFO: stdout: "Created e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310\nScaling up e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 16 10:48:36.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-s7qn8'
Feb 16 10:48:37.088: INFO: stderr: ""
Feb 16 10:48:37.088: INFO: stdout: "e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310-778bc "
Feb 16 10:48:37.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310-778bc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s7qn8'
Feb 16 10:48:37.212: INFO: stderr: ""
Feb 16 10:48:37.212: INFO: stdout: "true"
Feb 16 10:48:37.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310-778bc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s7qn8'
Feb 16 10:48:37.332: INFO: stderr: ""
Feb 16 10:48:37.332: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 16 10:48:37.332: INFO: e2e-test-nginx-rc-a3afc7fa0d289b1543d44de90bbc4310-778bc is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 16 10:48:37.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-s7qn8'
Feb 16 10:48:37.484: INFO: stderr: ""
Feb 16 10:48:37.484: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:48:37.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s7qn8" for this suite.
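`kubectl run --generator=run/v1`, as used in this test, creates a ReplicationController rather than a Deployment. A sketch of the roughly equivalent manifest, with the name, image, and pull policy taken from the logged command line; the `run:` selector label follows kubectl's usual convention for `kubectl run` but is an assumption here, since the generated object is not printed in the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc        # assumed; kubectl run labels pods this way
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent
```

`kubectl rolling-update` (deprecated in favor of `kubectl rollout`, as the stderr above notes) replaces such a controller pod-by-pod via a temporary controller, which is why the log shows a controller named after the new pod template's hash being scaled up while the original is scaled down, then renamed back.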
Feb 16 10:49:01.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:49:01.640: INFO: namespace: e2e-tests-kubectl-s7qn8, resource: bindings, ignored listing per whitelist
Feb 16 10:49:01.720: INFO: namespace e2e-tests-kubectl-s7qn8 deletion completed in 24.229469393s

• [SLOW TEST:51.854 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:49:01.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 10:49:01.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-zh2qp" to be "success or failure"
Feb 16 10:49:01.962: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42194ms
Feb 16 10:49:04.063: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104776832s
Feb 16 10:49:06.080: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121735429s
Feb 16 10:49:08.477: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518630025s
Feb 16 10:49:10.501: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543452802s
Feb 16 10:49:12.558: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.599635399s
STEP: Saw pod success
Feb 16 10:49:12.558: INFO: Pod "downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 10:49:12.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008 container client-container:
STEP: delete the pod
Feb 16 10:49:12.785: INFO: Waiting for pod downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008 to disappear
Feb 16 10:49:12.792: INFO: Pod downwardapi-volume-f17d1ba3-50a9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:49:12.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zh2qp" for this suite.
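"Set mode on item file" refers to the per-item `mode` field of a projected downward API volume, which controls the file permissions of the generated file. A sketch of what such a pod spec looks like; the names, image, and the 0400 mode are illustrative assumptions, since the log does not show the test's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    # Print the octal mode of the projected file so it can be verified.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # the per-item file mode this kind of test asserts on
```

A test in this style runs the pod to completion and checks the container's log output against the expected mode, matching the "success or failure" pattern seen above.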
Feb 16 10:49:18.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 10:49:18.998: INFO: namespace: e2e-tests-projected-zh2qp, resource: bindings, ignored listing per whitelist Feb 16 10:49:19.205: INFO: namespace e2e-tests-projected-zh2qp deletion completed in 6.399983655s • [SLOW TEST:17.484 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 10:49:19.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 16 10:49:19.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 16 10:49:19.775: INFO: stderr: "" Feb 16 10:49:19.775: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 10:49:19.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j99pf" for this suite. Feb 16 10:49:25.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 10:49:25.956: INFO: namespace: e2e-tests-kubectl-j99pf, resource: bindings, ignored listing per whitelist Feb 16 10:49:25.999: INFO: namespace e2e-tests-kubectl-j99pf deletion completed in 6.207752177s • [SLOW TEST:6.794 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 10:49:26.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode 
set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-ffef8752-50a9-11ea-aa00-0242ac110008 STEP: Creating a pod to test consume secrets Feb 16 10:49:26.196: INFO: Waiting up to 5m0s for pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-m6v9s" to be "success or failure" Feb 16 10:49:26.226: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.776637ms Feb 16 10:49:28.243: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046298047s Feb 16 10:49:30.255: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058323538s Feb 16 10:49:32.606: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409922873s Feb 16 10:49:34.937: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74055111s Feb 16 10:49:36.951: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.754640942s STEP: Saw pod success Feb 16 10:49:36.951: INFO: Pod "pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 10:49:36.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 16 10:49:38.281: INFO: Waiting for pod pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008 to disappear Feb 16 10:49:38.287: INFO: Pod pod-secrets-fff06542-50a9-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 10:49:38.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-m6v9s" for this suite. Feb 16 10:49:44.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 10:49:44.457: INFO: namespace: e2e-tests-secrets-m6v9s, resource: bindings, ignored listing per whitelist Feb 16 10:49:44.528: INFO: namespace e2e-tests-secrets-m6v9s deletion completed in 6.235728531s • [SLOW TEST:18.528 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 
10:49:44.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Feb 16 10:49:44.740: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 16 10:49:44.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shh9m' Feb 16 10:49:45.292: INFO: stderr: "" Feb 16 10:49:45.292: INFO: stdout: "service/redis-slave created\n" Feb 16 10:49:45.293: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 16 10:49:45.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shh9m' Feb 16 10:49:45.906: INFO: stderr: "" Feb 16 10:49:45.906: INFO: stdout: "service/redis-master created\n" Feb 16 10:49:45.907: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 16 10:49:45.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shh9m' Feb 16 10:49:46.390: INFO: stderr: "" Feb 16 10:49:46.390: INFO: stdout: "service/frontend created\n" Feb 16 10:49:46.391: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 16 10:49:46.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shh9m' Feb 16 10:49:46.901: INFO: stderr: "" Feb 16 10:49:46.901: INFO: stdout: "deployment.extensions/frontend created\n" Feb 16 10:49:46.902: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 16 10:49:46.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shh9m' Feb 16 10:49:47.305: INFO: stderr: "" Feb 16 10:49:47.305: INFO: stdout: "deployment.extensions/redis-master created\n" Feb 16 10:49:47.306: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: 
        gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 16 10:49:47.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:49:47.744: INFO: stderr: ""
Feb 16 10:49:47.744: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 16 10:49:47.744: INFO: Waiting for all frontend pods to be Running.
Feb 16 10:50:17.797: INFO: Waiting for frontend to serve content.
Feb 16 10:50:18.452: INFO: Trying to add a new entry to the guestbook.
Feb 16 10:50:18.573: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 16 10:50:18.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:50:19.160: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:50:19.160: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 10:50:19.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:50:19.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:50:19.351: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 10:50:19.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:50:19.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:50:19.581: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 10:50:19.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:50:19.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:50:19.874: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 10:50:19.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:50:20.218: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:50:20.218: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 16 10:50:20.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-shh9m'
Feb 16 10:50:20.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 10:50:20.580: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:50:20.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-shh9m" for this suite.
Feb 16 10:51:06.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:51:06.839: INFO: namespace: e2e-tests-kubectl-shh9m, resource: bindings, ignored listing per whitelist
Feb 16 10:51:06.950: INFO: namespace e2e-tests-kubectl-shh9m deletion completed in 46.342250119s

• [SLOW TEST:82.421 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:51:06.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 16 10:51:17.334: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:51:45.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-sv8vx" for this suite.
Feb 16 10:51:51.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:51:51.904: INFO: namespace: e2e-tests-namespaces-sv8vx, resource: bindings, ignored listing per whitelist
Feb 16 10:51:51.936: INFO: namespace e2e-tests-namespaces-sv8vx deletion completed in 6.299890577s
STEP: Destroying namespace "e2e-tests-nsdeletetest-bnjdv" for this suite.
Feb 16 10:51:51.942: INFO: Namespace e2e-tests-nsdeletetest-bnjdv was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4zwzp" for this suite.
Feb 16 10:51:58.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:51:58.106: INFO: namespace: e2e-tests-nsdeletetest-4zwzp, resource: bindings, ignored listing per whitelist
Feb 16 10:51:58.331: INFO: namespace e2e-tests-nsdeletetest-4zwzp deletion completed in 6.389164281s

• [SLOW TEST:51.381 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:51:58.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:52:08.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kd6x2" for this suite.
Feb 16 10:52:54.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:52:54.823: INFO: namespace: e2e-tests-kubelet-test-kd6x2, resource: bindings, ignored listing per whitelist
Feb 16 10:52:54.898: INFO: namespace e2e-tests-kubelet-test-kd6x2 deletion completed in 46.204685801s

• [SLOW TEST:56.567 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:52:54.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0216 10:53:09.437208 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 10:53:09.437: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:53:09.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9nlgr" for this suite.
Feb 16 10:53:27.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:53:30.293: INFO: namespace: e2e-tests-gc-9nlgr, resource: bindings, ignored listing per whitelist
Feb 16 10:53:30.338: INFO: namespace e2e-tests-gc-9nlgr deletion completed in 20.894724373s

• [SLOW TEST:35.440 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:53:30.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 16 10:53:33.586: INFO: Number of nodes with available pods: 0
Feb 16 10:53:33.587: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:34.776: INFO: Number of nodes with available pods: 0
Feb 16 10:53:34.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:35.612: INFO: Number of nodes with available pods: 0
Feb 16 10:53:35.612: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:36.652: INFO: Number of nodes with available pods: 0
Feb 16 10:53:36.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:37.634: INFO: Number of nodes with available pods: 0
Feb 16 10:53:37.634: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:38.620: INFO: Number of nodes with available pods: 0
Feb 16 10:53:38.621: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:41.337: INFO: Number of nodes with available pods: 0
Feb 16 10:53:41.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:41.607: INFO: Number of nodes with available pods: 0
Feb 16 10:53:41.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:42.635: INFO: Number of nodes with available pods: 0
Feb 16 10:53:42.635: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:43.621: INFO: Number of nodes with available pods: 0
Feb 16 10:53:43.622: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:44.623: INFO: Number of nodes with available pods: 1
Feb 16 10:53:44.623: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 16 10:53:44.724: INFO: Number of nodes with available pods: 0
Feb 16 10:53:44.724: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:45.748: INFO: Number of nodes with available pods: 0
Feb 16 10:53:45.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:46.754: INFO: Number of nodes with available pods: 0
Feb 16 10:53:46.754: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:47.989: INFO: Number of nodes with available pods: 0
Feb 16 10:53:47.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:48.765: INFO: Number of nodes with available pods: 0
Feb 16 10:53:48.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:49.752: INFO: Number of nodes with available pods: 0
Feb 16 10:53:49.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:50.748: INFO: Number of nodes with available pods: 0
Feb 16 10:53:50.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:51.756: INFO: Number of nodes with available pods: 0
Feb 16 10:53:51.756: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:52.748: INFO: Number of nodes with available pods: 0
Feb 16 10:53:52.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:53.744: INFO: Number of nodes with available pods: 0
Feb 16 10:53:53.744: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:54.754: INFO: Number of nodes with available pods: 0
Feb 16 10:53:54.754: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:55.743: INFO: Number of nodes with available pods: 0
Feb 16 10:53:55.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:56.746: INFO: Number of nodes with available pods: 0
Feb 16 10:53:56.746: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:57.748: INFO: Number of nodes with available pods: 0
Feb 16 10:53:57.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:58.762: INFO: Number of nodes with available pods: 0
Feb 16 10:53:58.762: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:53:59.741: INFO: Number of nodes with available pods: 0
Feb 16 10:53:59.741: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:00.744: INFO: Number of nodes with available pods: 0
Feb 16 10:54:00.744: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:01.764: INFO: Number of nodes with available pods: 0
Feb 16 10:54:01.764: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:02.915: INFO: Number of nodes with available pods: 0
Feb 16 10:54:02.916: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:03.932: INFO: Number of nodes with available pods: 0
Feb 16 10:54:03.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:04.755: INFO: Number of nodes with available pods: 0
Feb 16 10:54:04.755: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:05.742: INFO: Number of nodes with available pods: 0
Feb 16 10:54:05.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:06.839: INFO: Number of nodes with available pods: 0
Feb 16 10:54:06.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:08.321: INFO: Number of nodes with available pods: 0
Feb 16 10:54:08.322: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:09.006: INFO: Number of nodes with available pods: 0
Feb 16 10:54:09.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:09.762: INFO: Number of nodes with available pods: 0
Feb 16 10:54:09.762: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 10:54:10.793: INFO: Number of nodes with available pods: 1
Feb 16 10:54:10.793: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9fg9z, will wait for the garbage collector to delete the pods
Feb 16 10:54:10.870: INFO: Deleting DaemonSet.extensions daemon-set took: 17.359268ms
Feb 16 10:54:10.971: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.412689ms
Feb 16 10:54:19.078: INFO: Number of nodes with available pods: 0
Feb 16 10:54:19.078: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 10:54:19.086: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9fg9z/daemonsets","resourceVersion":"21855055"},"items":null}
Feb 16 10:54:19.092: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9fg9z/pods","resourceVersion":"21855055"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:54:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9fg9z" for this suite.
Feb 16 10:54:25.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:54:25.280: INFO: namespace: e2e-tests-daemonsets-9fg9z, resource: bindings, ignored listing per whitelist
Feb 16 10:54:25.297: INFO: namespace e2e-tests-daemonsets-9fg9z deletion completed in 6.19001612s

• [SLOW TEST:54.958 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:54:25.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-b2687ca6-50aa-11ea-aa00-0242ac110008
STEP: Creating secret with name s-test-opt-upd-b2687e17-50aa-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b2687ca6-50aa-11ea-aa00-0242ac110008
STEP: Updating secret s-test-opt-upd-b2687e17-50aa-11ea-aa00-0242ac110008
STEP: Creating secret with name s-test-opt-create-b2687e4a-50aa-11ea-aa00-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:56:02.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rkxxm" for this suite.
Feb 16 10:56:26.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:56:26.191: INFO: namespace: e2e-tests-projected-rkxxm, resource: bindings, ignored listing per whitelist
Feb 16 10:56:26.318: INFO: namespace e2e-tests-projected-rkxxm deletion completed in 24.294077816s

• [SLOW TEST:121.021 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:56:26.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 10:56:26.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-cz8gj" to be "success or failure"
Feb 16 10:56:26.725: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.01604ms
Feb 16 10:56:28.894: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184919502s
Feb 16 10:56:30.908: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19953793s
Feb 16 10:56:32.925: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215767563s
Feb 16 10:56:34.939: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229632749s
Feb 16 10:56:36.951: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.242077674s
STEP: Saw pod success
Feb 16 10:56:36.951: INFO: Pod "downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 10:56:36.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008 container client-container:
STEP: delete the pod
Feb 16 10:56:37.173: INFO: Waiting for pod downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008 to disappear
Feb 16 10:56:37.186: INFO: Pod downwardapi-volume-fa92f585-50aa-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:56:37.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cz8gj" for this suite.
Feb 16 10:56:43.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 10:56:43.399: INFO: namespace: e2e-tests-projected-cz8gj, resource: bindings, ignored listing per whitelist
Feb 16 10:56:43.518: INFO: namespace e2e-tests-projected-cz8gj deletion completed in 6.321658018s

• [SLOW TEST:17.200 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 10:56:43.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 16 10:56:43.855: INFO: Waiting up to 5m0s for pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008" in namespace "e2e-tests-containers-5ws6j" to be "success or failure"
Feb 16 10:56:44.064: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 208.766498ms
Feb 16 10:56:46.078: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223526082s
Feb 16 10:56:48.106: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251419023s
Feb 16 10:56:50.210: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354911234s
Feb 16 10:56:52.222: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366768645s
Feb 16 10:56:54.359: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.503905368s
Feb 16 10:56:56.381: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.526465149s
STEP: Saw pod success
Feb 16 10:56:56.382: INFO: Pod "client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 10:56:56.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008 container test-container:
STEP: delete the pod
Feb 16 10:56:56.597: INFO: Waiting for pod client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008 to disappear
Feb 16 10:56:56.619: INFO: Pod client-containers-04cbd9af-50ab-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 10:56:56.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-5ws6j" for this suite.
Feb 16 10:57:02.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 10:57:02.758: INFO: namespace: e2e-tests-containers-5ws6j, resource: bindings, ignored listing per whitelist Feb 16 10:57:02.830: INFO: namespace e2e-tests-containers-5ws6j deletion completed in 6.189641968s • [SLOW TEST:19.311 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 10:57:02.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-5rl7g [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for 
selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5rl7g
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-5rl7g
Feb 16 10:57:02.996: INFO: Found 0 stateful pods, waiting for 1
Feb 16 10:57:13.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 16 10:57:13.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 10:57:13.579: INFO: stderr: "I0216 10:57:13.188945 649 log.go:172] (0xc0004882c0) (0xc0005da500) Create stream\nI0216 10:57:13.189054 649 log.go:172] (0xc0004882c0) (0xc0005da500) Stream added, broadcasting: 1\nI0216 10:57:13.199117 649 log.go:172] (0xc0004882c0) Reply frame received for 1\nI0216 10:57:13.199165 649 log.go:172] (0xc0004882c0) (0xc0000f05a0) Create stream\nI0216 10:57:13.199172 649 log.go:172] (0xc0004882c0) (0xc0000f05a0) Stream added, broadcasting: 3\nI0216 10:57:13.200699 649 log.go:172] (0xc0004882c0) Reply frame received for 3\nI0216 10:57:13.200744 649 log.go:172] (0xc0004882c0) (0xc0002d6000) Create stream\nI0216 10:57:13.200757 649 log.go:172] (0xc0004882c0) (0xc0002d6000) Stream added, broadcasting: 5\nI0216 10:57:13.202053 649 log.go:172] (0xc0004882c0) Reply frame received for 5\nI0216 10:57:13.363370 649 log.go:172] (0xc0004882c0) Data frame received for 3\nI0216 10:57:13.363521 649 log.go:172] (0xc0000f05a0) (3) Data frame handling\nI0216 10:57:13.363538 649 log.go:172] (0xc0000f05a0) (3) Data frame sent\nI0216 10:57:13.570249 649 log.go:172] (0xc0004882c0) (0xc0000f05a0) Stream removed, broadcasting: 3\nI0216 10:57:13.570437 649 log.go:172] (0xc0004882c0) Data frame received for 1\nI0216 10:57:13.570463 649 log.go:172] (0xc0005da500) (1) Data frame handling\nI0216 10:57:13.570472 649 log.go:172] (0xc0005da500) (1) Data frame sent\nI0216 10:57:13.570481 649 log.go:172] (0xc0004882c0) (0xc0005da500) Stream removed, broadcasting: 1\nI0216 10:57:13.570871 649 log.go:172] (0xc0004882c0) (0xc0002d6000) Stream removed, broadcasting: 5\nI0216 10:57:13.571064 649 log.go:172] (0xc0004882c0) (0xc0005da500) Stream removed, broadcasting: 1\nI0216 10:57:13.571083 649 log.go:172] (0xc0004882c0) (0xc0000f05a0) Stream removed, broadcasting: 3\nI0216 10:57:13.571103 649 log.go:172] (0xc0004882c0) (0xc0002d6000) Stream removed, broadcasting: 5\nI0216 10:57:13.571339 649 log.go:172] (0xc0004882c0) Go away received\n"
Feb 16 10:57:13.579: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 10:57:13.579: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 16 10:57:13.603: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 10:57:13.603: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 10:57:13.740: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999549s
Feb 16 10:57:14.762: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.95820775s
Feb 16 10:57:15.940: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.93629572s
Feb 16 10:57:16.979: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.758916514s
Feb 16 10:57:17.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.71965297s
Feb 16 10:57:19.124: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.700835777s
Feb 16 10:57:20.155: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.574261594s
Feb 16 10:57:21.171: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.543793289s
Feb 16 10:57:22.191: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.527294385s
Feb 16 10:57:23.204: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 508.200294ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5rl7g Feb 16 10:57:24.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:57:25.086: INFO: stderr: "I0216 10:57:24.429611 672 log.go:172] (0xc000706370) (0xc0007aa640) Create stream\nI0216 10:57:24.430041 672 log.go:172] (0xc000706370) (0xc0007aa640) Stream added, broadcasting: 1\nI0216 10:57:24.440621 672 log.go:172] (0xc000706370) Reply frame received for 1\nI0216 10:57:24.440658 672 log.go:172] (0xc000706370) (0xc0005dedc0) Create stream\nI0216 10:57:24.440669 672 log.go:172] (0xc000706370) (0xc0005dedc0) Stream added, broadcasting: 3\nI0216 10:57:24.442392 672 log.go:172] (0xc000706370) Reply frame received for 3\nI0216 10:57:24.442438 672 log.go:172] (0xc000706370) (0xc000704000) Create stream\nI0216 10:57:24.442463 672 log.go:172] (0xc000706370) (0xc000704000) Stream added, broadcasting: 5\nI0216 10:57:24.446524 672 log.go:172] (0xc000706370) Reply frame received for 5\nI0216 10:57:24.826528 672 log.go:172] (0xc000706370) Data frame received for 3\nI0216 10:57:24.826599 672 log.go:172] (0xc0005dedc0) (3) Data frame handling\nI0216 10:57:24.826606 672 log.go:172] (0xc0005dedc0) (3) Data frame sent\nI0216 10:57:25.068733 672 log.go:172] (0xc000706370) Data frame received for 1\nI0216 10:57:25.069013 672 log.go:172] (0xc0007aa640) (1) Data frame handling\nI0216 10:57:25.069103 672 log.go:172] (0xc0007aa640) (1) Data frame sent\nI0216 10:57:25.069159 672 log.go:172] (0xc000706370) (0xc0007aa640) Stream removed, broadcasting: 1\nI0216 10:57:25.070187 672 log.go:172] (0xc000706370) (0xc0005dedc0) Stream removed, broadcasting: 3\nI0216 10:57:25.070647 672 log.go:172] (0xc000706370) (0xc000704000) Stream removed, 
broadcasting: 5\nI0216 10:57:25.070731 672 log.go:172] (0xc000706370) (0xc0007aa640) Stream removed, broadcasting: 1\nI0216 10:57:25.070740 672 log.go:172] (0xc000706370) (0xc0005dedc0) Stream removed, broadcasting: 3\nI0216 10:57:25.070749 672 log.go:172] (0xc000706370) (0xc000704000) Stream removed, broadcasting: 5\nI0216 10:57:25.071397 672 log.go:172] (0xc000706370) Go away received\n"
Feb 16 10:57:25.087: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 10:57:25.087: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 16 10:57:25.141: INFO: Found 1 stateful pods, waiting for 3
Feb 16 10:57:35.164: INFO: Found 2 stateful pods, waiting for 3
Feb 16 10:57:45.223: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 10:57:45.223: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 10:57:45.223: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 10:57:55.351: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 10:57:55.351: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 10:57:55.351: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 16 10:57:55.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 10:57:56.111: INFO: stderr: "I0216 10:57:55.721116 695 log.go:172] (0xc0006de2c0) (0xc000125540) Create stream\nI0216 10:57:55.721292 695 log.go:172] (0xc0006de2c0) (0xc000125540) Stream added, broadcasting: 1\nI0216 10:57:55.730387 695 
log.go:172] (0xc0006de2c0) Reply frame received for 1\nI0216 10:57:55.730481 695 log.go:172] (0xc0006de2c0) (0xc0002e6000) Create stream\nI0216 10:57:55.730495 695 log.go:172] (0xc0006de2c0) (0xc0002e6000) Stream added, broadcasting: 3\nI0216 10:57:55.734593 695 log.go:172] (0xc0006de2c0) Reply frame received for 3\nI0216 10:57:55.734651 695 log.go:172] (0xc0006de2c0) (0xc000698000) Create stream\nI0216 10:57:55.734669 695 log.go:172] (0xc0006de2c0) (0xc000698000) Stream added, broadcasting: 5\nI0216 10:57:55.736449 695 log.go:172] (0xc0006de2c0) Reply frame received for 5\nI0216 10:57:55.882837 695 log.go:172] (0xc0006de2c0) Data frame received for 3\nI0216 10:57:55.882893 695 log.go:172] (0xc0002e6000) (3) Data frame handling\nI0216 10:57:55.882916 695 log.go:172] (0xc0002e6000) (3) Data frame sent\nI0216 10:57:56.101634 695 log.go:172] (0xc0006de2c0) (0xc0002e6000) Stream removed, broadcasting: 3\nI0216 10:57:56.101935 695 log.go:172] (0xc0006de2c0) Data frame received for 1\nI0216 10:57:56.101987 695 log.go:172] (0xc0006de2c0) (0xc000698000) Stream removed, broadcasting: 5\nI0216 10:57:56.102036 695 log.go:172] (0xc000125540) (1) Data frame handling\nI0216 10:57:56.102076 695 log.go:172] (0xc000125540) (1) Data frame sent\nI0216 10:57:56.102107 695 log.go:172] (0xc0006de2c0) (0xc000125540) Stream removed, broadcasting: 1\nI0216 10:57:56.102119 695 log.go:172] (0xc0006de2c0) Go away received\nI0216 10:57:56.102492 695 log.go:172] (0xc0006de2c0) (0xc000125540) Stream removed, broadcasting: 1\nI0216 10:57:56.102512 695 log.go:172] (0xc0006de2c0) (0xc0002e6000) Stream removed, broadcasting: 3\nI0216 10:57:56.102522 695 log.go:172] (0xc0006de2c0) (0xc000698000) Stream removed, broadcasting: 5\n" Feb 16 10:57:56.111: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 16 10:57:56.111: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 16 10:57:56.111: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 16 10:57:56.972: INFO: stderr: "I0216 10:57:56.450435 718 log.go:172] (0xc0008542c0) (0xc000722640) Create stream\nI0216 10:57:56.450588 718 log.go:172] (0xc0008542c0) (0xc000722640) Stream added, broadcasting: 1\nI0216 10:57:56.456321 718 log.go:172] (0xc0008542c0) Reply frame received for 1\nI0216 10:57:56.456361 718 log.go:172] (0xc0008542c0) (0xc000676dc0) Create stream\nI0216 10:57:56.456374 718 log.go:172] (0xc0008542c0) (0xc000676dc0) Stream added, broadcasting: 3\nI0216 10:57:56.457923 718 log.go:172] (0xc0008542c0) Reply frame received for 3\nI0216 10:57:56.457964 718 log.go:172] (0xc0008542c0) (0xc0005e2000) Create stream\nI0216 10:57:56.457977 718 log.go:172] (0xc0008542c0) (0xc0005e2000) Stream added, broadcasting: 5\nI0216 10:57:56.458996 718 log.go:172] (0xc0008542c0) Reply frame received for 5\nI0216 10:57:56.835059 718 log.go:172] (0xc0008542c0) Data frame received for 3\nI0216 10:57:56.835085 718 log.go:172] (0xc000676dc0) (3) Data frame handling\nI0216 10:57:56.835100 718 log.go:172] (0xc000676dc0) (3) Data frame sent\nI0216 10:57:56.962316 718 log.go:172] (0xc0008542c0) (0xc000676dc0) Stream removed, broadcasting: 3\nI0216 10:57:56.962750 718 log.go:172] (0xc0008542c0) Data frame received for 1\nI0216 10:57:56.962816 718 log.go:172] (0xc000722640) (1) Data frame handling\nI0216 10:57:56.962857 718 log.go:172] (0xc000722640) (1) Data frame sent\nI0216 10:57:56.962900 718 log.go:172] (0xc0008542c0) (0xc000722640) Stream removed, broadcasting: 1\nI0216 10:57:56.963179 718 log.go:172] (0xc0008542c0) (0xc0005e2000) Stream removed, broadcasting: 5\nI0216 10:57:56.963325 718 log.go:172] (0xc0008542c0) Go away received\nI0216 10:57:56.963423 718 log.go:172] (0xc0008542c0) (0xc000722640) Stream removed, broadcasting: 1\nI0216 10:57:56.963449 718 log.go:172] 
(0xc0008542c0) (0xc000676dc0) Stream removed, broadcasting: 3\nI0216 10:57:56.963460 718 log.go:172] (0xc0008542c0) (0xc0005e2000) Stream removed, broadcasting: 5\n" Feb 16 10:57:56.972: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 16 10:57:56.972: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 16 10:57:56.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 16 10:57:57.473: INFO: stderr: "I0216 10:57:57.165495 740 log.go:172] (0xc00077c370) (0xc00065b540) Create stream\nI0216 10:57:57.165777 740 log.go:172] (0xc00077c370) (0xc00065b540) Stream added, broadcasting: 1\nI0216 10:57:57.170856 740 log.go:172] (0xc00077c370) Reply frame received for 1\nI0216 10:57:57.170896 740 log.go:172] (0xc00077c370) (0xc0006ae000) Create stream\nI0216 10:57:57.170903 740 log.go:172] (0xc00077c370) (0xc0006ae000) Stream added, broadcasting: 3\nI0216 10:57:57.172265 740 log.go:172] (0xc00077c370) Reply frame received for 3\nI0216 10:57:57.172313 740 log.go:172] (0xc00077c370) (0xc000768000) Create stream\nI0216 10:57:57.172322 740 log.go:172] (0xc00077c370) (0xc000768000) Stream added, broadcasting: 5\nI0216 10:57:57.173704 740 log.go:172] (0xc00077c370) Reply frame received for 5\nI0216 10:57:57.309598 740 log.go:172] (0xc00077c370) Data frame received for 3\nI0216 10:57:57.309647 740 log.go:172] (0xc0006ae000) (3) Data frame handling\nI0216 10:57:57.309662 740 log.go:172] (0xc0006ae000) (3) Data frame sent\nI0216 10:57:57.461092 740 log.go:172] (0xc00077c370) Data frame received for 1\nI0216 10:57:57.461143 740 log.go:172] (0xc00065b540) (1) Data frame handling\nI0216 10:57:57.461159 740 log.go:172] (0xc00065b540) (1) Data frame sent\nI0216 10:57:57.461173 740 log.go:172] (0xc00077c370) (0xc00065b540) Stream removed, 
broadcasting: 1\nI0216 10:57:57.462786 740 log.go:172] (0xc00077c370) (0xc000768000) Stream removed, broadcasting: 5\nI0216 10:57:57.463016 740 log.go:172] (0xc00077c370) (0xc0006ae000) Stream removed, broadcasting: 3\nI0216 10:57:57.463060 740 log.go:172] (0xc00077c370) (0xc00065b540) Stream removed, broadcasting: 1\nI0216 10:57:57.463079 740 log.go:172] (0xc00077c370) (0xc0006ae000) Stream removed, broadcasting: 3\nI0216 10:57:57.463088 740 log.go:172] (0xc00077c370) (0xc000768000) Stream removed, broadcasting: 5\n"
Feb 16 10:57:57.473: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 10:57:57.473: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 16 10:57:57.473: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 10:57:57.510: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 16 10:58:07.541: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 10:58:07.541: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 10:58:07.541: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 10:58:07.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999401s
Feb 16 10:58:08.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981131878s
Feb 16 10:58:09.691: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.8850083s
Feb 16 10:58:10.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.869329331s
Feb 16 10:58:11.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.858383765s
Feb 16 10:58:12.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.692500954s
Feb 16 10:58:14.158: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.676358524s
Feb 16 10:58:15.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.402513741s
Feb 16 10:58:16.217: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.370230096s
Feb 16 10:58:17.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 343.40959ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-5rl7g
Feb 16 10:58:18.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 10:58:18.944: INFO: stderr: "I0216 10:58:18.560258 761 log.go:172] (0xc000740370) (0xc0006632c0) Create stream\nI0216 10:58:18.560444 761 log.go:172] (0xc000740370) (0xc0006632c0) Stream added, broadcasting: 1\nI0216 10:58:18.569996 761 log.go:172] (0xc000740370) Reply frame received for 1\nI0216 10:58:18.570031 761 log.go:172] (0xc000740370) (0xc000698000) Create stream\nI0216 10:58:18.570047 761 log.go:172] (0xc000740370) (0xc000698000) Stream added, broadcasting: 3\nI0216 10:58:18.571342 761 log.go:172] (0xc000740370) Reply frame received for 3\nI0216 10:58:18.571376 761 log.go:172] (0xc000740370) (0xc0006ee000) Create stream\nI0216 10:58:18.571392 761 log.go:172] (0xc000740370) (0xc0006ee000) Stream added, broadcasting: 5\nI0216 10:58:18.572593 761 log.go:172] (0xc000740370) Reply frame received for 5\nI0216 10:58:18.790246 761 log.go:172] (0xc000740370) Data frame received for 3\nI0216 10:58:18.790323 761 log.go:172] (0xc000698000) (3) Data frame handling\nI0216 10:58:18.790348 761 log.go:172] (0xc000698000) (3) Data frame sent\nI0216 10:58:18.930508 761 log.go:172] (0xc000740370) Data frame received for 1\nI0216 10:58:18.930596 761 log.go:172] (0xc0006632c0) (1) Data frame handling\nI0216 10:58:18.930627 761 log.go:172] (0xc0006632c0) (1) Data frame sent\nI0216 10:58:18.930658 761 log.go:172] (0xc000740370) (0xc0006632c0) Stream removed, broadcasting: 1\nI0216 
10:58:18.930721 761 log.go:172] (0xc000740370) (0xc000698000) Stream removed, broadcasting: 3\nI0216 10:58:18.931025 761 log.go:172] (0xc000740370) (0xc0006ee000) Stream removed, broadcasting: 5\nI0216 10:58:18.931054 761 log.go:172] (0xc000740370) (0xc0006632c0) Stream removed, broadcasting: 1\nI0216 10:58:18.931073 761 log.go:172] (0xc000740370) (0xc000698000) Stream removed, broadcasting: 3\nI0216 10:58:18.931085 761 log.go:172] (0xc000740370) (0xc0006ee000) Stream removed, broadcasting: 5\n" Feb 16 10:58:18.944: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 16 10:58:18.944: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 16 10:58:18.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:58:19.546: INFO: stderr: "I0216 10:58:19.159476 783 log.go:172] (0xc000138840) (0xc0005a9400) Create stream\nI0216 10:58:19.159573 783 log.go:172] (0xc000138840) (0xc0005a9400) Stream added, broadcasting: 1\nI0216 10:58:19.164437 783 log.go:172] (0xc000138840) Reply frame received for 1\nI0216 10:58:19.164480 783 log.go:172] (0xc000138840) (0xc0005a94a0) Create stream\nI0216 10:58:19.164500 783 log.go:172] (0xc000138840) (0xc0005a94a0) Stream added, broadcasting: 3\nI0216 10:58:19.165847 783 log.go:172] (0xc000138840) Reply frame received for 3\nI0216 10:58:19.165870 783 log.go:172] (0xc000138840) (0xc0005a9540) Create stream\nI0216 10:58:19.165877 783 log.go:172] (0xc000138840) (0xc0005a9540) Stream added, broadcasting: 5\nI0216 10:58:19.167031 783 log.go:172] (0xc000138840) Reply frame received for 5\nI0216 10:58:19.320459 783 log.go:172] (0xc000138840) Data frame received for 3\nI0216 10:58:19.320662 783 log.go:172] (0xc0005a94a0) (3) Data frame handling\nI0216 10:58:19.320683 783 log.go:172] (0xc0005a94a0) (3) 
Data frame sent\nI0216 10:58:19.536615 783 log.go:172] (0xc000138840) Data frame received for 1\nI0216 10:58:19.536684 783 log.go:172] (0xc0005a9400) (1) Data frame handling\nI0216 10:58:19.536702 783 log.go:172] (0xc0005a9400) (1) Data frame sent\nI0216 10:58:19.536722 783 log.go:172] (0xc000138840) (0xc0005a9400) Stream removed, broadcasting: 1\nI0216 10:58:19.536815 783 log.go:172] (0xc000138840) (0xc0005a94a0) Stream removed, broadcasting: 3\nI0216 10:58:19.537222 783 log.go:172] (0xc000138840) (0xc0005a9540) Stream removed, broadcasting: 5\nI0216 10:58:19.537265 783 log.go:172] (0xc000138840) Go away received\nI0216 10:58:19.537298 783 log.go:172] (0xc000138840) (0xc0005a9400) Stream removed, broadcasting: 1\nI0216 10:58:19.537331 783 log.go:172] (0xc000138840) (0xc0005a94a0) Stream removed, broadcasting: 3\nI0216 10:58:19.537385 783 log.go:172] (0xc000138840) (0xc0005a9540) Stream removed, broadcasting: 5\n" Feb 16 10:58:19.546: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 16 10:58:19.546: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 16 10:58:19.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:58:20.202: INFO: rc: 126 Feb 16 10:58:20.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown I0216 10:58:19.889713 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Create stream I0216 10:58:19.889867 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Stream added, broadcasting: 1 I0216 10:58:19.898957 805 log.go:172] 
(0xc00014a6e0) Reply frame received for 1 I0216 10:58:19.898984 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Create stream I0216 10:58:19.898991 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Stream added, broadcasting: 3 I0216 10:58:19.900056 805 log.go:172] (0xc00014a6e0) Reply frame received for 3 I0216 10:58:19.900078 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Create stream I0216 10:58:19.900091 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Stream added, broadcasting: 5 I0216 10:58:19.902417 805 log.go:172] (0xc00014a6e0) Reply frame received for 5 I0216 10:58:20.191771 805 log.go:172] (0xc00014a6e0) Data frame received for 3 I0216 10:58:20.191890 805 log.go:172] (0xc00073e000) (3) Data frame handling I0216 10:58:20.191925 805 log.go:172] (0xc00073e000) (3) Data frame sent I0216 10:58:20.194392 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Stream removed, broadcasting: 5 I0216 10:58:20.194778 805 log.go:172] (0xc00014a6e0) Data frame received for 1 I0216 10:58:20.194957 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Stream removed, broadcasting: 3 I0216 10:58:20.195048 805 log.go:172] (0xc00065f400) (1) Data frame handling I0216 10:58:20.195072 805 log.go:172] (0xc00065f400) (1) Data frame sent I0216 10:58:20.195089 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Stream removed, broadcasting: 1 I0216 10:58:20.195112 805 log.go:172] (0xc00014a6e0) Go away received I0216 10:58:20.195830 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Stream removed, broadcasting: 1 I0216 10:58:20.195859 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Stream removed, broadcasting: 3 I0216 10:58:20.195877 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc001f6af60 exit status 126 true [0xc001a4e270 0xc001a4e288 0xc001a4e2a8] [0xc001a4e270 0xc001a4e288 0xc001a4e2a8] [0xc001a4e280 0xc001a4e2a0] [0x935700 0x935700] 0xc001c325a0 }: Command stdout: OCI runtime exec failed: exec failed: cannot exec a container that has 
stopped: unknown stderr: I0216 10:58:19.889713 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Create stream I0216 10:58:19.889867 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Stream added, broadcasting: 1 I0216 10:58:19.898957 805 log.go:172] (0xc00014a6e0) Reply frame received for 1 I0216 10:58:19.898984 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Create stream I0216 10:58:19.898991 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Stream added, broadcasting: 3 I0216 10:58:19.900056 805 log.go:172] (0xc00014a6e0) Reply frame received for 3 I0216 10:58:19.900078 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Create stream I0216 10:58:19.900091 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Stream added, broadcasting: 5 I0216 10:58:19.902417 805 log.go:172] (0xc00014a6e0) Reply frame received for 5 I0216 10:58:20.191771 805 log.go:172] (0xc00014a6e0) Data frame received for 3 I0216 10:58:20.191890 805 log.go:172] (0xc00073e000) (3) Data frame handling I0216 10:58:20.191925 805 log.go:172] (0xc00073e000) (3) Data frame sent I0216 10:58:20.194392 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Stream removed, broadcasting: 5 I0216 10:58:20.194778 805 log.go:172] (0xc00014a6e0) Data frame received for 1 I0216 10:58:20.194957 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Stream removed, broadcasting: 3 I0216 10:58:20.195048 805 log.go:172] (0xc00065f400) (1) Data frame handling I0216 10:58:20.195072 805 log.go:172] (0xc00065f400) (1) Data frame sent I0216 10:58:20.195089 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Stream removed, broadcasting: 1 I0216 10:58:20.195112 805 log.go:172] (0xc00014a6e0) Go away received I0216 10:58:20.195830 805 log.go:172] (0xc00014a6e0) (0xc00065f400) Stream removed, broadcasting: 1 I0216 10:58:20.195859 805 log.go:172] (0xc00014a6e0) (0xc00073e000) Stream removed, broadcasting: 3 I0216 10:58:20.195877 805 log.go:172] (0xc00014a6e0) (0xc00073e0a0) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Feb 
16 10:58:30.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:58:30.392: INFO: rc: 1 Feb 16 10:58:30.392: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001f6b080 exit status 1 true [0xc001a4e2b0 0xc001a4e2c8 0xc001a4e2e0] [0xc001a4e2b0 0xc001a4e2c8 0xc001a4e2e0] [0xc001a4e2c0 0xc001a4e2d8] [0x935700 0x935700] 0xc001c32b40 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 16 10:58:40.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:58:40.531: INFO: rc: 1 Feb 16 10:58:40.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001d7d5c0 exit status 1 true [0xc000a3a498 0xc000a3a550 0xc000a3a5a8] [0xc000a3a498 0xc000a3a550 0xc000a3a5a8] [0xc000a3a548 0xc000a3a598] [0x935700 0x935700] 0xc001bf3a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 16 10:58:50.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:58:50.648: INFO: rc: 1 Feb 16 10:58:50.649: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001751d10 exit status 1 true [0xc0009cadf0 0xc0009cae28 0xc0009cae70] [0xc0009cadf0 0xc0009cae28 0xc0009cae70] [0xc0009cae10 0xc0009cae60] [0x935700 0x935700] 0xc001bfb9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 16 10:59:00.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:59:00.758: INFO: rc: 1 Feb 16 10:59:00.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001387c80 exit status 1 true [0xc00040bda8 0xc00040be08 0xc00040be80] [0xc00040bda8 0xc00040be08 0xc00040be80] [0xc00040bdc8 0xc00040be40] [0x935700 0x935700] 0xc001692540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 16 10:59:10.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 16 10:59:10.934: INFO: rc: 1 Feb 16 10:59:10.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000329410 exit status 1 true 
[0xc00000e010 0xc00040ac40 0xc00040ae78] [0xc00000e010 0xc00040ac40 0xc00040ae78] [0xc00040ac00 0xc00040ae20] [0x935700 0x935700] 0xc001b1e6c0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Feb 16 10:59:20.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 10:59:21.040: INFO: rc: 1
Feb 16 10:59:21.041: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013c6120 exit status 1 true [0xc000a3a020 0xc000a3a0d8 0xc000a3a108] [0xc000a3a020 0xc000a3a0d8 0xc000a3a108] [0xc000a3a0a0 0xc000a3a100] [0x935700 0x935700] 0xc00192a3c0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Feb 16 11:03:24.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5rl7g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 11:03:24.748: INFO: rc: 1
Feb 16 11:03:24.749: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Feb 16 11:03:24.749: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 16 11:03:24.803: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5rl7g
Feb 16 11:03:24.808: INFO: Scaling statefulset ss to 0
Feb 16 11:03:24.832: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 11:03:24.838: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:03:24.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5rl7g" for this suite.
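The retry cycle recorded above — re-running the same `kubectl exec` every 10 seconds while the server keeps answering NotFound — can be sketched as a small shell loop. `run_host_cmd` below is an illustrative stand-in for the real kubectl call, not the e2e framework's code; here it fails twice and succeeds on the third attempt purely so the loop terminates:

```shell
#!/bin/sh
# Sketch of the e2e RunHostCmd retry loop seen in the log above.
# run_host_cmd stands in for the real `kubectl exec` call; it fails
# twice and succeeds on the third attempt, purely for illustration.
attempt=0
run_host_cmd() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]   # non-zero exit (failure) until the third call
}
until run_host_cmd; do
  echo "rc: 1"
  echo "Waiting to retry failed RunHostCmd"
  sleep 0   # the e2e framework sleeps 10s between attempts
done
echo "RunHostCmd succeeded on attempt $attempt"
```

In the log the command never succeeds; the retries simply stop around 11:03:24, which suggests the framework's loop also has an overall deadline, after which the test moves on to scaling the StatefulSet down.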
Feb 16 11:03:32.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:03:33.048: INFO: namespace: e2e-tests-statefulset-5rl7g, resource: bindings, ignored listing per whitelist
Feb 16 11:03:33.054: INFO: namespace e2e-tests-statefulset-5rl7g deletion completed in 8.139063113s

• [SLOW TEST:390.224 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:03:33.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:03:41.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5nfxk" for this suite.
Feb 16 11:04:35.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:04:35.500: INFO: namespace: e2e-tests-kubelet-test-5nfxk, resource: bindings, ignored listing per whitelist
Feb 16 11:04:35.557: INFO: namespace e2e-tests-kubelet-test-5nfxk deletion completed in 54.206449328s

• [SLOW TEST:62.502 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:04:35.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 11:04:35.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-dqxgh" to be "success or failure"
Feb 16 11:04:35.899: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 71.320074ms
Feb 16 11:04:37.919: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091213627s
Feb 16 11:04:39.946: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118374745s
Feb 16 11:04:42.511: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.683092277s
Feb 16 11:04:44.534: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.705857509s
Feb 16 11:04:46.564: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.736263822s
STEP: Saw pod success
Feb 16 11:04:46.565: INFO: Pod "downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:04:46.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008 container client-container:
STEP: delete the pod
Feb 16 11:04:47.051: INFO: Waiting for pod downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008 to disappear
Feb 16 11:04:47.074: INFO: Pod downwardapi-volume-1e1c069c-50ac-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:04:47.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dqxgh" for this suite.
Feb 16 11:04:53.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:04:53.295: INFO: namespace: e2e-tests-downward-api-dqxgh, resource: bindings, ignored listing per whitelist
Feb 16 11:04:53.307: INFO: namespace e2e-tests-downward-api-dqxgh deletion completed in 6.225169035s

• [SLOW TEST:17.749 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:04:53.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:04:53.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-x54mg" for this suite.
Feb 16 11:04:59.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:04:59.963: INFO: namespace: e2e-tests-services-x54mg, resource: bindings, ignored listing per whitelist
Feb 16 11:04:59.971: INFO: namespace e2e-tests-services-x54mg deletion completed in 6.354854108s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.663 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:04:59.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2ca1853b-50ac-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 11:05:00.194: INFO: Waiting up to 5m0s for pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-fmsr7" to be "success or failure"
Feb 16 11:05:00.214: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.867496ms
Feb 16 11:05:02.715: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521673454s
Feb 16 11:05:04.738: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544209947s
Feb 16 11:05:06.969: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.775788845s
Feb 16 11:05:09.403: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.209228446s
Feb 16 11:05:11.423: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.229768947s
STEP: Saw pod success
Feb 16 11:05:11.424: INFO: Pod "pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:05:11.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008 container secret-volume-test:
STEP: delete the pod
Feb 16 11:05:11.623: INFO: Waiting for pod pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008 to disappear
Feb 16 11:05:11.645: INFO: Pod pod-secrets-2ca35a74-50ac-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:05:11.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fmsr7" for this suite.
Feb 16 11:05:17.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:05:18.001: INFO: namespace: e2e-tests-secrets-fmsr7, resource: bindings, ignored listing per whitelist
Feb 16 11:05:18.035: INFO: namespace e2e-tests-secrets-fmsr7 deletion completed in 6.181169229s

• [SLOW TEST:18.064 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:05:18.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:05:18.262: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 16 11:05:24.320: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 16 11:05:28.351: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 16 11:05:30.366: INFO: Creating deployment "test-rollover-deployment"
Feb 16 11:05:30.418: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 16 11:05:32.471: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 16 11:05:32.506: INFO: Ensure that both replica sets have 1 created replica
Feb 16 11:05:32.525: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 16 11:05:32.625: INFO: Updating deployment test-rollover-deployment
Feb 16 11:05:32.626: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 16 11:05:34.757: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 16 11:05:34.768: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 16 11:05:34.782: INFO: all replica sets need to contain the pod-template-hash label
Feb 16 11:05:34.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447934, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 11:05:44.795: INFO: all
replica sets need to contain the pod-template-hash label Feb 16 11:05:44.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447944, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 16 11:05:46.807: INFO: all replica sets need to contain the pod-template-hash label Feb 16 11:05:46.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447944, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 16 11:05:48.803: INFO: all replica sets need to contain the pod-template-hash label Feb 16 11:05:48.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447944, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 16 11:05:50.810: INFO: all replica sets need to contain the pod-template-hash label Feb 16 11:05:50.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447944, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 16 11:05:52.811: INFO: all replica sets need to contain the pod-template-hash label Feb 16 11:05:52.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447944, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 16 11:05:54.814: INFO: all replica sets need to contain the pod-template-hash label Feb 16 11:05:54.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63717447944, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717447930, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 16 11:05:56.932: INFO: Feb 16 11:05:56.932: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 16 11:05:57.129: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-qsbxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qsbxx/deployments/test-rollover-deployment,UID:3ea2ecd8-50ac-11ea-a994-fa163e34d433,ResourceVersion:21856325,Generation:2,CreationTimestamp:2020-02-16 11:05:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-16 11:05:30 +0000 UTC 2020-02-16 11:05:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-16 11:05:55 +0000 UTC 2020-02-16 11:05:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 16 11:05:57.148: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-qsbxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qsbxx/replicasets/test-rollover-deployment-5b8479fdb6,UID:4009db42-50ac-11ea-a994-fa163e34d433,ResourceVersion:21856316,Generation:2,CreationTimestamp:2020-02-16 11:05:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3ea2ecd8-50ac-11ea-a994-fa163e34d433 0xc001e14757 0xc001e14758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 16 11:05:57.148: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 16 11:05:57.148: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-qsbxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qsbxx/replicasets/test-rollover-controller,UID:37675edb-50ac-11ea-a994-fa163e34d433,ResourceVersion:21856324,Generation:2,CreationTimestamp:2020-02-16 11:05:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3ea2ecd8-50ac-11ea-a994-fa163e34d433 0xc001e14287 0xc001e14288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 16 11:05:57.149: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-qsbxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-qsbxx/replicasets/test-rollover-deployment-58494b7559,UID:3ebbb85e-50ac-11ea-a994-fa163e34d433,ResourceVersion:21856280,Generation:2,CreationTimestamp:2020-02-16 11:05:30 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3ea2ecd8-50ac-11ea-a994-fa163e34d433 0xc001e14617 0xc001e14618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 16 11:05:57.157: INFO: Pod "test-rollover-deployment-5b8479fdb6-rk9c4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-rk9c4,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-qsbxx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-qsbxx/pods/test-rollover-deployment-5b8479fdb6-rk9c4,UID:40a7f4cb-50ac-11ea-a994-fa163e34d433,ResourceVersion:21856301,Generation:0,CreationTimestamp:2020-02-16 11:05:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 4009db42-50ac-11ea-a994-fa163e34d433 0xc001af03b7 0xc001af03b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mtdvw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mtdvw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mtdvw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001af0420} {node.kubernetes.io/unreachable Exists NoExecute 0xc001af0440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:05:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:05:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:05:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:05:34 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-16 11:05:34 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-16 11:05:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://73c9cbc94b4c9e9301f35bf513b341225c37a1a0e3efc558ecb0851f28ff00e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:05:57.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-qsbxx" for this suite. Feb 16 11:06:05.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:06:05.399: INFO: namespace: e2e-tests-deployment-qsbxx, resource: bindings, ignored listing per whitelist Feb 16 11:06:05.534: INFO: namespace e2e-tests-deployment-qsbxx deletion completed in 8.371518035s • [SLOW TEST:47.499 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 11:06:05.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-547436e0-50ac-11ea-aa00-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 16 11:06:07.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-ckkhd" to be "success or failure" Feb 16 11:06:07.015: INFO: Pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.081359ms Feb 16 11:06:09.030: INFO: Pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022123273s Feb 16 11:06:12.626: INFO: Pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.618831864s Feb 16 11:06:14.642: INFO: Pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.634568592s Feb 16 11:06:16.664: INFO: Pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.656038585s STEP: Saw pod success Feb 16 11:06:16.664: INFO: Pod "pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:06:16.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 16 11:06:16.914: INFO: Waiting for pod pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008 to disappear Feb 16 11:06:16.929: INFO: Pod pod-configmaps-54758b19-50ac-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:06:16.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ckkhd" for this suite. Feb 16 11:06:23.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:06:23.179: INFO: namespace: e2e-tests-configmap-ckkhd, resource: bindings, ignored listing per whitelist Feb 16 11:06:23.221: INFO: namespace e2e-tests-configmap-ckkhd deletion completed in 6.272444137s • [SLOW TEST:17.686 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 
11:06:23.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 16 11:06:23.461: INFO: Waiting up to 5m0s for pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008" in namespace "e2e-tests-var-expansion-58dd7" to be "success or failure"
Feb 16 11:06:23.491: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.323204ms
Feb 16 11:06:25.694: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232899577s
Feb 16 11:06:27.719: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257881501s
Feb 16 11:06:29.761: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299232171s
Feb 16 11:06:32.086: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.624714256s
Feb 16 11:06:34.273: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.812119796s
STEP: Saw pod success
Feb 16 11:06:34.274: INFO: Pod "var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:06:34.287: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008 container dapi-container:
STEP: delete the pod
Feb 16 11:06:34.478: INFO: Waiting for pod var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008 to disappear
Feb 16 11:06:34.485: INFO: Pod var-expansion-5e38b31e-50ac-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:06:34.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-58dd7" for this suite.
Feb 16 11:06:40.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:06:40.755: INFO: namespace: e2e-tests-var-expansion-58dd7, resource: bindings, ignored listing per whitelist
Feb 16 11:06:40.802: INFO: namespace e2e-tests-var-expansion-58dd7 deletion completed in 6.306871622s

• [SLOW TEST:17.581 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:06:40.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:06:41.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 16 11:06:41.288: INFO: stderr: ""
Feb 16 11:06:41.288: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:06:41.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7bjjv" for this suite.
Feb 16 11:06:47.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:06:47.526: INFO: namespace: e2e-tests-kubectl-7bjjv, resource: bindings, ignored listing per whitelist
Feb 16 11:06:47.543: INFO: namespace e2e-tests-kubectl-7bjjv deletion completed in 6.232802707s

• [SLOW TEST:6.741 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:06:47.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 16 11:07:09.946: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:09.969: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:11.969: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:12.117: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:13.969: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:13.981: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:15.970: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:15.989: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:17.969: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:17.989: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:19.969: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:19.984: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:21.969: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:22.001: INFO: Pod pod-with-prestop-http-hook still exists
Feb 16 11:07:23.969: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 16 11:07:23.986: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:07:24.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-czpmx" for this suite.
Feb 16 11:07:46.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:07:46.142: INFO: namespace: e2e-tests-container-lifecycle-hook-czpmx, resource: bindings, ignored listing per whitelist
Feb 16 11:07:46.283: INFO: namespace e2e-tests-container-lifecycle-hook-czpmx deletion completed in 22.260264708s

• [SLOW TEST:58.739 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:07:46.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 16 11:07:46.414: INFO: Waiting up to 5m0s for pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-5nskv" to be "success or failure"
Feb 16 11:07:46.427: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.241451ms
Feb 16 11:07:48.513: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098543666s
Feb 16 11:07:50.550: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135915281s
Feb 16 11:07:53.479: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.06443675s
Feb 16 11:07:55.513: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.099074502s
Feb 16 11:07:57.526: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.111888514s
STEP: Saw pod success
Feb 16 11:07:57.526: INFO: Pod "pod-8fb7b664-50ac-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:07:57.534: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8fb7b664-50ac-11ea-aa00-0242ac110008 container test-container:
STEP: delete the pod
Feb 16 11:07:58.355: INFO: Waiting for pod pod-8fb7b664-50ac-11ea-aa00-0242ac110008 to disappear
Feb 16 11:07:58.393: INFO: Pod pod-8fb7b664-50ac-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:07:58.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5nskv" for this suite.
Feb 16 11:08:04.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:08:04.638: INFO: namespace: e2e-tests-emptydir-5nskv, resource: bindings, ignored listing per whitelist
Feb 16 11:08:04.768: INFO: namespace e2e-tests-emptydir-5nskv deletion completed in 6.365597161s

• [SLOW TEST:18.484 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:08:04.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-ztngb;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-ztngb;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ztngb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 56.73.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.73.56_udp@PTR;check="$$(dig +tcp +noall +answer +search 56.73.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.73.56_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-ztngb;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-ztngb;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ztngb.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ztngb.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ztngb.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 56.73.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.73.56_udp@PTR;check="$$(dig +tcp +noall +answer +search 56.73.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.73.56_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 11:08:19.167: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.174: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.183: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-ztngb from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.190: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ztngb from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.197: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.204: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.219: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.225: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.232: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.237: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.245: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.282: INFO: Unable to read 10.98.73.56_udp@PTR from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.293: INFO: Unable to read 10.98.73.56_tcp@PTR from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.313: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.326: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.333: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ztngb from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.387: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ztngb from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.395: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.409: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.417: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.431: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.436: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.444: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.451: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.456: INFO: Unable to read 10.98.73.56_udp@PTR from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.461: INFO: Unable to read 10.98.73.56_tcp@PTR from pod e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008)
Feb 16 11:08:19.461: INFO: Lookups using e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-ztngb wheezy_tcp@dns-test-service.e2e-tests-dns-ztngb wheezy_udp@dns-test-service.e2e-tests-dns-ztngb.svc wheezy_tcp@dns-test-service.e2e-tests-dns-ztngb.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.98.73.56_udp@PTR 10.98.73.56_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ztngb jessie_tcp@dns-test-service.e2e-tests-dns-ztngb jessie_udp@dns-test-service.e2e-tests-dns-ztngb.svc jessie_tcp@dns-test-service.e2e-tests-dns-ztngb.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ztngb.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-ztngb.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.98.73.56_udp@PTR 10.98.73.56_tcp@PTR]
Feb 16 11:08:24.896: INFO: DNS probes using e2e-tests-dns-ztngb/dns-test-9ad277ec-50ac-11ea-aa00-0242ac110008 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:08:25.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-ztngb" for this suite.
Feb 16 11:08:33.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:08:33.565: INFO: namespace: e2e-tests-dns-ztngb, resource: bindings, ignored listing per whitelist
Feb 16 11:08:33.582: INFO: namespace e2e-tests-dns-ztngb deletion completed in 8.231101508s

• [SLOW TEST:28.814 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:08:33.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 16 11:08:34.161: INFO: Waiting up to 5m0s for pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008" in namespace "e2e-tests-containers-w8hkx" to be "success or failure"
Feb 16 11:08:34.183: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.203367ms
Feb 16 11:08:36.285: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123704672s
Feb 16 11:08:38.308: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146834496s
Feb 16 11:08:40.331: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169596467s
Feb 16 11:08:42.348: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186801281s
Feb 16 11:08:44.366: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204705754s
Feb 16 11:08:46.604: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.442915267s
STEP: Saw pod success
Feb 16 11:08:46.605: INFO: Pod "client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:08:46.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008 container test-container:
STEP: delete the pod
Feb 16 11:08:46.868: INFO: Waiting for pod client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008 to disappear
Feb 16 11:08:46.893: INFO: Pod client-containers-ac2277fd-50ac-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:08:46.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-w8hkx" for this suite.
Feb 16 11:08:52.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:08:53.169: INFO: namespace: e2e-tests-containers-w8hkx, resource: bindings, ignored listing per whitelist
Feb 16 11:08:53.173: INFO: namespace e2e-tests-containers-w8hkx deletion completed in 6.25953792s

• [SLOW TEST:19.591 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:08:53.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-p7cd
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 11:08:53.456: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p7cd" in namespace "e2e-tests-subpath-74cqf" to be "success or failure"
Feb 16 11:08:53.520: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 64.441233ms
Feb 16 11:08:55.535: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079424666s
Feb 16 11:08:57.553: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097483004s
Feb 16 11:08:59.653: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196870225s
Feb 16 11:09:01.668: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212218936s
Feb 16 11:09:03.685: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.228718882s
Feb 16 11:09:05.699: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.243542582s
Feb 16 11:09:08.098: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.642176633s
Feb 16 11:09:10.117: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.660912909s
Feb 16 11:09:12.133: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 18.676932969s
Feb 16 11:09:14.152: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 20.69582978s
Feb 16 11:09:16.164: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 22.70794449s
Feb 16 11:09:18.179: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 24.722919783s
Feb 16 11:09:20.193: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 26.737506327s
Feb 16 11:09:22.209: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 28.752688312s
Feb 16 11:09:24.219: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 30.763438126s
Feb 16 11:09:26.233: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 32.777131023s
Feb 16 11:09:28.246: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Running", Reason="", readiness=false. Elapsed: 34.790176011s
Feb 16 11:09:30.687: INFO: Pod "pod-subpath-test-configmap-p7cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.230723731s
STEP: Saw pod success
Feb 16 11:09:30.687: INFO: Pod "pod-subpath-test-configmap-p7cd" satisfied condition "success or failure"
Feb 16 11:09:30.707: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-p7cd container test-container-subpath-configmap-p7cd:
STEP: delete the pod
Feb 16 11:09:31.031: INFO: Waiting for pod pod-subpath-test-configmap-p7cd to disappear
Feb 16 11:09:31.052: INFO: Pod pod-subpath-test-configmap-p7cd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-p7cd
Feb 16 11:09:31.052: INFO: Deleting pod "pod-subpath-test-configmap-p7cd" in namespace "e2e-tests-subpath-74cqf"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:09:31.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-74cqf" for this suite.
Feb 16 11:09:37.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:09:37.185: INFO: namespace: e2e-tests-subpath-74cqf, resource: bindings, ignored listing per whitelist
Feb 16 11:09:37.322: INFO: namespace e2e-tests-subpath-74cqf deletion completed in 6.259298968s

• [SLOW TEST:44.148 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:09:37.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:09:37.578: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 16 11:09:37.587: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kkpc7/daemonsets","resourceVersion":"21856851"},"items":null}
Feb 16 11:09:37.599: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kkpc7/pods","resourceVersion":"21856851"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:09:37.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kkpc7" for this suite.
Feb 16 11:09:45.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:09:45.865: INFO: namespace: e2e-tests-daemonsets-kkpc7, resource: bindings, ignored listing per whitelist
Feb 16 11:09:45.914: INFO: namespace e2e-tests-daemonsets-kkpc7 deletion completed in 8.226735283s

S [SKIPPING] [8.592 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 16 11:09:37.578: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:09:45.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bng8l
Feb 16 11:09:58.142: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bng8l
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 11:09:58.149: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:13:59.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bng8l" for this suite.
Feb 16 11:14:07.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:14:07.486: INFO: namespace: e2e-tests-container-probe-bng8l, resource: bindings, ignored listing per whitelist
Feb 16 11:14:07.528: INFO: namespace e2e-tests-container-probe-bng8l deletion completed in 8.233565558s

• [SLOW TEST:261.613 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:14:07.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-730b00bb-50ad-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 11:14:07.832: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-rw2zn" to be "success or failure"
Feb 16 11:14:07.859: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.441446ms
Feb 16 11:14:10.135: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303187093s
Feb 16 11:14:12.166: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333672843s
Feb 16 11:14:14.409: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577227749s
Feb 16 11:14:16.438: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605989703s
Feb 16 11:14:18.474: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.642462217s
STEP: Saw pod success
Feb 16 11:14:18.475: INFO: Pod "pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:14:18.487: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008 container projected-secret-volume-test:
STEP: delete the pod
Feb 16 11:14:19.731: INFO: Waiting for pod pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008 to disappear
Feb 16 11:14:19.790: INFO: Pod pod-projected-secrets-730c7daf-50ad-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:14:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rw2zn" for this suite.
Feb 16 11:14:26.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:14:26.371: INFO: namespace: e2e-tests-projected-rw2zn, resource: bindings, ignored listing per whitelist
Feb 16 11:14:26.398: INFO: namespace e2e-tests-projected-rw2zn deletion completed in 6.48390555s

• [SLOW TEST:18.871 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:14:26.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 16 11:14:26.796: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-8fjtl,SelfLink:/api/v1/namespaces/e2e-tests-watch-8fjtl/configmaps/e2e-watch-test-label-changed,UID:7e4d89a7-50ad-11ea-a994-fa163e34d433,ResourceVersion:21857251,Generation:0,CreationTimestamp:2020-02-16 11:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 11:14:26.796: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-8fjtl,SelfLink:/api/v1/namespaces/e2e-tests-watch-8fjtl/configmaps/e2e-watch-test-label-changed,UID:7e4d89a7-50ad-11ea-a994-fa163e34d433,ResourceVersion:21857252,Generation:0,CreationTimestamp:2020-02-16 11:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 16 11:14:26.796: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-8fjtl,SelfLink:/api/v1/namespaces/e2e-tests-watch-8fjtl/configmaps/e2e-watch-test-label-changed,UID:7e4d89a7-50ad-11ea-a994-fa163e34d433,ResourceVersion:21857253,Generation:0,CreationTimestamp:2020-02-16 11:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 16 11:14:36.906: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-8fjtl,SelfLink:/api/v1/namespaces/e2e-tests-watch-8fjtl/configmaps/e2e-watch-test-label-changed,UID:7e4d89a7-50ad-11ea-a994-fa163e34d433,ResourceVersion:21857267,Generation:0,CreationTimestamp:2020-02-16 11:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 11:14:36.906: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-8fjtl,SelfLink:/api/v1/namespaces/e2e-tests-watch-8fjtl/configmaps/e2e-watch-test-label-changed,UID:7e4d89a7-50ad-11ea-a994-fa163e34d433,ResourceVersion:21857268,Generation:0,CreationTimestamp:2020-02-16 11:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 16 11:14:36.907: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-8fjtl,SelfLink:/api/v1/namespaces/e2e-tests-watch-8fjtl/configmaps/e2e-watch-test-label-changed,UID:7e4d89a7-50ad-11ea-a994-fa163e34d433,ResourceVersion:21857269,Generation:0,CreationTimestamp:2020-02-16 11:14:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:14:36.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8fjtl" for this suite.
Feb 16 11:14:42.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:14:43.017: INFO: namespace: e2e-tests-watch-8fjtl, resource: bindings, ignored listing per whitelist
Feb 16 11:14:43.103: INFO: namespace e2e-tests-watch-8fjtl deletion completed in 6.184533229s

• [SLOW TEST:16.704 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:14:43.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-8831fd5f-50ad-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:14:43.316: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-rkzhc" to be "success or failure"
Feb 16 11:14:43.325: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.648246ms
Feb 16 11:14:45.458: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14190877s
Feb 16 11:14:47.469: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152173037s
Feb 16 11:14:49.485: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168079954s
Feb 16 11:14:51.528: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211754961s
Feb 16 11:14:53.546: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22928485s
STEP: Saw pod success
Feb 16 11:14:53.546: INFO: Pod "pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:14:53.566: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008 container projected-configmap-volume-test:
STEP: delete the pod
Feb 16 11:14:53.988: INFO: Waiting for pod pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008 to disappear
Feb 16 11:14:53.999: INFO: Pod pod-projected-configmaps-8832b5f1-50ad-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:14:53.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rkzhc" for this suite.
Feb 16 11:15:00.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:15:00.258: INFO: namespace: e2e-tests-projected-rkzhc, resource: bindings, ignored listing per whitelist
Feb 16 11:15:00.596: INFO: namespace e2e-tests-projected-rkzhc deletion completed in 6.58828727s

• [SLOW TEST:17.492 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:15:00.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 16 11:15:21.080: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:21.121: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 11:15:23.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:23.142: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 11:15:25.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:25.140: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 11:15:27.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:27.138: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 11:15:29.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:29.141: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 11:15:31.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:31.138: INFO: Pod pod-with-poststart-http-hook still exists
Feb 16 11:15:33.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 16 11:15:33.142: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:15:33.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-2xtzn" for this suite.
Feb 16 11:15:59.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:15:59.350: INFO: namespace: e2e-tests-container-lifecycle-hook-2xtzn, resource: bindings, ignored listing per whitelist
Feb 16 11:15:59.390: INFO: namespace e2e-tests-container-lifecycle-hook-2xtzn deletion completed in 26.236558561s

• [SLOW TEST:58.794 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:15:59.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 11:15:59.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-gth6x" to be "success or failure"
Feb 16 11:15:59.715: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.363188ms
Feb 16 11:16:02.055: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351079764s
Feb 16 11:16:04.093: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388668649s
Feb 16 11:16:06.105: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401339361s
Feb 16 11:16:08.116: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412210574s
Feb 16 11:16:10.138: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.433834628s
Feb 16 11:16:12.559: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.854635439s
STEP: Saw pod success
Feb 16 11:16:12.559: INFO: Pod "downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:16:12.803: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008 container client-container:
STEP: delete the pod
Feb 16 11:16:12.943: INFO: Waiting for pod downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008 to disappear
Feb 16 11:16:12.955: INFO: Pod downwardapi-volume-b5be2deb-50ad-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:16:12.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gth6x" for this suite.
Feb 16 11:16:19.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:16:19.262: INFO: namespace: e2e-tests-projected-gth6x, resource: bindings, ignored listing per whitelist
Feb 16 11:16:19.416: INFO: namespace e2e-tests-projected-gth6x deletion completed in 6.454151032s

• [SLOW TEST:20.025 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:16:19.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 16 11:16:19.791: INFO: Waiting up to 5m0s for pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-cn4cx" to be "success or failure" Feb 16 11:16:19.805: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.45127ms Feb 16 11:16:21.982: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190887607s Feb 16 11:16:23.997: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205164551s Feb 16 11:16:26.302: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.510598608s Feb 16 11:16:28.316: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524201088s Feb 16 11:16:30.326: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.534929041s STEP: Saw pod success Feb 16 11:16:30.327: INFO: Pod "pod-c1b547f4-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:16:30.330: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c1b547f4-50ad-11ea-aa00-0242ac110008 container test-container: STEP: delete the pod Feb 16 11:16:31.210: INFO: Waiting for pod pod-c1b547f4-50ad-11ea-aa00-0242ac110008 to disappear Feb 16 11:16:31.515: INFO: Pod pod-c1b547f4-50ad-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:16:31.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cn4cx" for this suite. Feb 16 11:16:37.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:16:37.633: INFO: namespace: e2e-tests-emptydir-cn4cx, resource: bindings, ignored listing per whitelist Feb 16 11:16:37.917: INFO: namespace e2e-tests-emptydir-cn4cx deletion completed in 6.381613692s • [SLOW TEST:18.500 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 11:16:37.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a 
default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 16 11:16:38.173: INFO: Waiting up to 5m0s for pod "pod-cca46677-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-572rt" to be "success or failure" Feb 16 11:16:38.185: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.526474ms Feb 16 11:16:40.359: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185762821s Feb 16 11:16:42.382: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208093958s Feb 16 11:16:44.498: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.324331057s Feb 16 11:16:46.589: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415155201s Feb 16 11:16:48.769: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.595304608s STEP: Saw pod success Feb 16 11:16:48.769: INFO: Pod "pod-cca46677-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:16:48.776: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-cca46677-50ad-11ea-aa00-0242ac110008 container test-container: STEP: delete the pod Feb 16 11:16:48.856: INFO: Waiting for pod pod-cca46677-50ad-11ea-aa00-0242ac110008 to disappear Feb 16 11:16:48.916: INFO: Pod pod-cca46677-50ad-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:16:48.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-572rt" for this suite. Feb 16 11:16:54.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:16:55.190: INFO: namespace: e2e-tests-emptydir-572rt, resource: bindings, ignored listing per whitelist Feb 16 11:16:55.223: INFO: namespace e2e-tests-emptydir-572rt deletion completed in 6.29699489s • [SLOW TEST:17.305 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 11:16:55.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 16 11:16:55.355: INFO: namespace e2e-tests-kubectl-fzhj2 Feb 16 11:16:55.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fzhj2' Feb 16 11:16:57.911: INFO: stderr: "" Feb 16 11:16:57.911: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 16 11:16:59.722: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:16:59.722: INFO: Found 0 / 1 Feb 16 11:17:00.078: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:00.079: INFO: Found 0 / 1 Feb 16 11:17:00.951: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:00.951: INFO: Found 0 / 1 Feb 16 11:17:01.933: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:01.933: INFO: Found 0 / 1 Feb 16 11:17:02.923: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:02.923: INFO: Found 0 / 1 Feb 16 11:17:04.994: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:04.994: INFO: Found 0 / 1 Feb 16 11:17:05.964: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:05.964: INFO: Found 0 / 1 Feb 16 11:17:06.929: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:06.929: INFO: Found 0 / 1 Feb 16 11:17:07.929: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:07.929: INFO: Found 1 / 1 Feb 16 11:17:07.929: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 16 11:17:07.936: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:17:07.936: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
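The "Waiting for Redis master to start" loop above (Selector matched / Found 0 of 1, then Found 1 of 1) can be approximated by hand. This is a hedged sketch, not the test's Go implementation: the namespace and selector are copied from the log, while the polling interval and helper name are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Sketch: poll a label selector until the expected number of pods is Running,
# mirroring the WaitFor loop in the log above. Requires a reachable cluster.
NAMESPACE="e2e-tests-kubectl-fzhj2"   # namespace taken from the log
SELECTOR="app=redis"                  # selector taken from the log
EXPECTED=1
DEADLINE=$((SECONDS + 300))           # 5m0s, matching the WaitFor timeout

wait_for_pods() {
  while [ "$SECONDS" -lt "$DEADLINE" ]; do
    # Count pods for the selector that have reached phase Running.
    READY=$(kubectl get pods -n "$NAMESPACE" -l "$SELECTOR" \
      --field-selector=status.phase=Running --no-headers 2>/dev/null | wc -l)
    [ "$READY" -ge "$EXPECTED" ] && return 0
    sleep 1
  done
  return 1   # timed out, like a WaitFor failure
}
```

The real framework additionally checks readiness conditions, not just the pod phase; this sketch only reproduces the coarse "Found N / M" behavior visible in the log.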
Feb 16 11:17:07.936: INFO: wait on redis-master startup in e2e-tests-kubectl-fzhj2 Feb 16 11:17:07.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-62p2w redis-master --namespace=e2e-tests-kubectl-fzhj2' Feb 16 11:17:08.119: INFO: stderr: "" Feb 16 11:17:08.119: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Feb 11:17:06.953 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Feb 11:17:06.953 # Server started, Redis version 3.2.12\n1:M 16 Feb 11:17:06.954 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Feb 11:17:06.954 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 16 11:17:08.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-fzhj2' Feb 16 11:17:08.410: INFO: stderr: "" Feb 16 11:17:08.410: INFO: stdout: "service/rm2 exposed\n" Feb 16 11:17:08.417: INFO: Service rm2 in namespace e2e-tests-kubectl-fzhj2 found. 
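The "exposing RC" step above maps directly to two `kubectl expose` invocations. A minimal sketch follows; the names, ports, and namespace are copied from the log output, and the commands are echoed rather than executed because they need a live cluster with the redis-master RC already running.

```shell
#!/usr/bin/env bash
# Sketch of the two expose steps from the test log.
NAMESPACE="e2e-tests-kubectl-fzhj2"

# Expose the replication controller as service "rm2": clients hit port 1234,
# traffic is forwarded to the Redis container port 6379.
CMD_RM2="kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=$NAMESPACE"

# Expose an existing service as another service "rm3" on a different port;
# the target port still points at the container's 6379.
CMD_RM3="kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=$NAMESPACE"

echo "$CMD_RM2"
echo "$CMD_RM3"
```

Exposing a service from another service, as the second command does, is the less common form; it copies the selector from `rm2`, so `rm3` ends up routing to the same Redis pods.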
STEP: exposing service Feb 16 11:17:10.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-fzhj2' Feb 16 11:17:10.728: INFO: stderr: "" Feb 16 11:17:10.729: INFO: stdout: "service/rm3 exposed\n" Feb 16 11:17:10.741: INFO: Service rm3 in namespace e2e-tests-kubectl-fzhj2 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:17:12.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fzhj2" for this suite. Feb 16 11:17:36.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:17:36.911: INFO: namespace: e2e-tests-kubectl-fzhj2, resource: bindings, ignored listing per whitelist Feb 16 11:17:36.952: INFO: namespace e2e-tests-kubectl-fzhj2 deletion completed in 24.188097334s • [SLOW TEST:41.730 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 11:17:36.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-efda2752-50ad-11ea-aa00-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 16 11:17:37.227: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-xfplk" to be "success or failure" Feb 16 11:17:37.319: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 91.09852ms Feb 16 11:17:39.335: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107841854s Feb 16 11:17:41.357: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129325526s Feb 16 11:17:43.978: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750136021s Feb 16 11:17:46.052: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824582305s Feb 16 11:17:48.079: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.851432256s STEP: Saw pod success Feb 16 11:17:48.079: INFO: Pod "pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:17:48.091: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 16 11:17:48.197: INFO: Waiting for pod pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008 to disappear Feb 16 11:17:48.203: INFO: Pod pod-projected-configmaps-efdb89d1-50ad-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:17:48.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xfplk" for this suite. Feb 16 11:17:56.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:17:56.268: INFO: namespace: e2e-tests-projected-xfplk, resource: bindings, ignored listing per whitelist Feb 16 11:17:56.701: INFO: namespace e2e-tests-projected-xfplk deletion completed in 8.491808289s • [SLOW TEST:19.748 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 11:17:56.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 16 11:17:56.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-sdt9s" to be "success or failure" Feb 16 11:17:56.911: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.227233ms Feb 16 11:17:58.928: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035613876s Feb 16 11:18:00.938: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045795389s Feb 16 11:18:02.956: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063855649s Feb 16 11:18:04.975: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083045957s Feb 16 11:18:07.042: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.150268777s STEP: Saw pod success Feb 16 11:18:07.044: INFO: Pod "downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:18:07.108: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008 container client-container: STEP: delete the pod Feb 16 11:18:07.289: INFO: Waiting for pod downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008 to disappear Feb 16 11:18:07.302: INFO: Pod downwardapi-volume-fb8e4381-50ad-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:18:07.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sdt9s" for this suite. Feb 16 11:18:13.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:18:13.535: INFO: namespace: e2e-tests-projected-sdt9s, resource: bindings, ignored listing per whitelist Feb 16 11:18:13.543: INFO: namespace e2e-tests-projected-sdt9s deletion completed in 6.231496949s • [SLOW TEST:16.840 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Feb 16 11:18:13.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-05b3754b-50ae-11ea-aa00-0242ac110008 STEP: Creating a pod to test consume secrets Feb 16 11:18:13.928: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-spkcm" to be "success or failure" Feb 16 11:18:13.980: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 51.79858ms Feb 16 11:18:16.024: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096280037s Feb 16 11:18:18.037: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109275343s Feb 16 11:18:20.047: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119073036s Feb 16 11:18:22.340: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411771337s Feb 16 11:18:24.364: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.435905713s STEP: Saw pod success Feb 16 11:18:24.364: INFO: Pod "pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:18:24.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 16 11:18:24.788: INFO: Waiting for pod pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008 to disappear Feb 16 11:18:24.814: INFO: Pod pod-projected-secrets-05bcabad-50ae-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:18:24.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-spkcm" for this suite. Feb 16 11:18:30.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:18:31.049: INFO: namespace: e2e-tests-projected-spkcm, resource: bindings, ignored listing per whitelist Feb 16 11:18:31.069: INFO: namespace e2e-tests-projected-spkcm deletion completed in 6.243841792s • [SLOW TEST:17.526 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Feb 16 11:18:31.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-100d2f18-50ae-11ea-aa00-0242ac110008 STEP: Creating a pod to test consume secrets Feb 16 11:18:31.217: INFO: Waiting up to 5m0s for pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-w6l54" to be "success or failure" Feb 16 11:18:31.347: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 129.83844ms Feb 16 11:18:33.361: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143203277s Feb 16 11:18:35.376: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158546359s Feb 16 11:18:37.464: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246618995s Feb 16 11:18:39.475: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257097456s Feb 16 11:18:41.486: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.268783186s STEP: Saw pod success Feb 16 11:18:41.486: INFO: Pod "pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure" Feb 16 11:18:41.493: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 16 11:18:41.931: INFO: Waiting for pod pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008 to disappear Feb 16 11:18:42.208: INFO: Pod pod-secrets-100def9d-50ae-11ea-aa00-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:18:42.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-w6l54" for this suite. Feb 16 11:18:48.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 16 11:18:48.412: INFO: namespace: e2e-tests-secrets-w6l54, resource: bindings, ignored listing per whitelist Feb 16 11:18:48.642: INFO: namespace e2e-tests-secrets-w6l54 deletion completed in 6.392952838s • [SLOW TEST:17.572 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 16 11:18:48.642: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 16 11:18:48.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xjkk4' Feb 16 11:18:49.149: INFO: stderr: "" Feb 16 11:18:49.149: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Feb 16 11:18:50.164: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:50.164: INFO: Found 0 / 1 Feb 16 11:18:51.344: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:51.345: INFO: Found 0 / 1 Feb 16 11:18:52.176: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:52.176: INFO: Found 0 / 1 Feb 16 11:18:53.164: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:53.164: INFO: Found 0 / 1 Feb 16 11:18:54.169: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:54.169: INFO: Found 0 / 1 Feb 16 11:18:55.907: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:55.907: INFO: Found 0 / 1 Feb 16 11:18:56.277: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:56.277: INFO: Found 0 / 1 Feb 16 11:18:57.161: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:57.161: INFO: Found 0 / 1 Feb 16 11:18:58.167: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:58.167: INFO: Found 0 / 1 Feb 16 11:18:59.349: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:59.350: INFO: Found 1 / 1 Feb 
16 11:18:59.350: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 16 11:18:59.391: INFO: Selector matched 1 pods for map[app:redis] Feb 16 11:18:59.391: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 16 11:18:59.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kbgk8 redis-master --namespace=e2e-tests-kubectl-xjkk4' Feb 16 11:18:59.561: INFO: stderr: "" Feb 16 11:18:59.561: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Feb 11:18:57.858 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Feb 11:18:57.858 # Server started, Redis version 3.2.12\n1:M 16 Feb 11:18:57.858 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 16 Feb 11:18:57.858 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 16 11:18:59.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kbgk8 redis-master --namespace=e2e-tests-kubectl-xjkk4 --tail=1' Feb 16 11:18:59.708: INFO: stderr: "" Feb 16 11:18:59.708: INFO: stdout: "1:M 16 Feb 11:18:57.858 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 16 11:18:59.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kbgk8 redis-master --namespace=e2e-tests-kubectl-xjkk4 --limit-bytes=1' Feb 16 11:18:59.856: INFO: stderr: "" Feb 16 11:18:59.856: INFO: stdout: " " STEP: exposing timestamps Feb 16 11:18:59.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kbgk8 redis-master --namespace=e2e-tests-kubectl-xjkk4 --tail=1 --timestamps' Feb 16 11:19:00.013: INFO: stderr: "" Feb 16 11:19:00.013: INFO: stdout: "2020-02-16T11:18:57.861432014Z 1:M 16 Feb 11:18:57.858 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 16 11:19:02.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kbgk8 redis-master --namespace=e2e-tests-kubectl-xjkk4 --since=1s' Feb 16 11:19:02.809: INFO: stderr: "" Feb 16 11:19:02.809: INFO: stdout: "" Feb 16 11:19:02.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-kbgk8 redis-master --namespace=e2e-tests-kubectl-xjkk4 --since=24h' Feb 16 11:19:02.987: INFO: stderr: "" Feb 16 11:19:02.987: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 16 Feb 11:18:57.858 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 16 Feb 11:18:57.858 # Server started, Redis version 3.2.12\n1:M 16 Feb 11:18:57.858 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 16 Feb 11:18:57.858 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 16 11:19:02.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xjkk4' Feb 16 11:19:03.097: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 16 11:19:03.097: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 16 11:19:03.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-xjkk4' Feb 16 11:19:03.219: INFO: stderr: "No resources found.\n" Feb 16 11:19:03.219: INFO: stdout: "" Feb 16 11:19:03.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-xjkk4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 16 11:19:03.339: INFO: stderr: "" Feb 16 11:19:03.340: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 16 11:19:03.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xjkk4" for this suite. 
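The log-filtering steps above (limiting lines, limiting bytes, exposing timestamps, restricting to a time range) correspond to four flags of the logs subcommand. A hedged sketch: the pod and namespace names are taken from the log, and `kubectl logs` is the current spelling of the deprecated `kubectl log` form this 1.13-era test invokes. The commands are echoed, not run, since they need the pod to exist.

```shell
#!/usr/bin/env bash
# Sketch of the kubectl log-filtering flags exercised by the test above.
POD="redis-master-kbgk8"
NS="e2e-tests-kubectl-xjkk4"

echo kubectl logs "$POD" redis-master -n "$NS" --tail=1               # only the last line
echo kubectl logs "$POD" redis-master -n "$NS" --limit-bytes=1        # only the first byte
echo kubectl logs "$POD" redis-master -n "$NS" --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
echo kubectl logs "$POD" redis-master -n "$NS" --since=1s             # only entries newer than 1s
echo kubectl logs "$POD" redis-master -n "$NS" --since=24h            # effectively the full log here
```

This matches the observed output: `--since=1s` returns nothing because the Redis startup banner is older than one second by the time the command runs, while `--since=24h` returns the whole banner again.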
Feb 16 11:19:27.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:19:27.492: INFO: namespace: e2e-tests-kubectl-xjkk4, resource: bindings, ignored listing per whitelist
Feb 16 11:19:27.565: INFO: namespace e2e-tests-kubectl-xjkk4 deletion completed in 24.210734124s
• [SLOW TEST:38.923 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:19:27.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-31c8385c-50ae-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:19:43.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qqnlz" for this suite.
Feb 16 11:20:08.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:20:08.139: INFO: namespace: e2e-tests-configmap-qqnlz, resource: bindings, ignored listing per whitelist
Feb 16 11:20:08.221: INFO: namespace e2e-tests-configmap-qqnlz deletion completed in 24.261741205s
• [SLOW TEST:40.656 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:20:08.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-4a015b12-50ae-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:20:08.548: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-r5fxc" to be "success or failure"
Feb 16 11:20:08.623: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 74.526191ms
Feb 16 11:20:10.657: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109443675s
Feb 16 11:20:12.667: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11884357s
Feb 16 11:20:14.702: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154347605s
Feb 16 11:20:17.215: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667142709s
Feb 16 11:20:19.230: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.681791042s
STEP: Saw pod success
Feb 16 11:20:19.230: INFO: Pod "pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:20:19.237: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008 container projected-configmap-volume-test:
STEP: delete the pod
Feb 16 11:20:19.943: INFO: Waiting for pod pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008 to disappear
Feb 16 11:20:19.963: INFO: Pod pod-projected-configmaps-4a03f226-50ae-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:20:19.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r5fxc" for this suite.
Feb 16 11:20:26.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:20:26.091: INFO: namespace: e2e-tests-projected-r5fxc, resource: bindings, ignored listing per whitelist
Feb 16 11:20:26.199: INFO: namespace e2e-tests-projected-r5fxc deletion completed in 6.224620006s
• [SLOW TEST:17.977 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:20:26.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 16 11:20:37.425: INFO: Successfully updated pod "labelsupdate54c5cd42-50ae-11ea-aa00-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:20:39.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wg26m" for this suite.
Feb 16 11:21:03.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:21:04.220: INFO: namespace: e2e-tests-projected-wg26m, resource: bindings, ignored listing per whitelist
Feb 16 11:21:04.266: INFO: namespace e2e-tests-projected-wg26m deletion completed in 24.683871589s
• [SLOW TEST:38.066 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:21:04.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-44qz7 in namespace e2e-tests-proxy-8sv99
I0216 11:21:04.756809 9 runners.go:184] Created replication controller with name: proxy-service-44qz7, namespace: e2e-tests-proxy-8sv99, replica count: 1
I0216 11:21:05.807763 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:06.808188 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:07.808567 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:08.809136 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:09.809613 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:10.809996 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:11.810721 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:12.811170 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:13.811614 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0216 11:21:14.812082 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0216 11:21:15.812604 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0216 11:21:16.813338 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0216 11:21:17.813819 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0216 11:21:18.814239 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0216 11:21:19.814704 9 runners.go:184] proxy-service-44qz7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 16 11:21:19.833: INFO: setup took 15.327609704s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 16 11:21:19.880: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8sv99/pods/proxy-service-44qz7-js7vb:162/proxy/: bar (200; 46.513835ms)
Feb 16 11:21:19.880: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8sv99/pods/http:proxy-service-44qz7-js7vb:162/proxy/: bar (200; 46.331053ms)
Feb 16 11:21:19.882: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8sv99/pods/http:proxy-service-44qz7-js7vb:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-kcsdb/secret-test-7cca393e-50ae-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 11:21:33.660: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-kcsdb" to be "success or failure"
Feb 16 11:21:33.731: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 70.935115ms
Feb 16 11:21:35.748: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087701657s
Feb 16 11:21:37.761: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10061907s
Feb 16 11:21:39.778: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117874891s
Feb 16 11:21:41.801: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140767796s
Feb 16 11:21:44.280: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.620037907s
STEP: Saw pod success
Feb 16 11:21:44.280: INFO: Pod "pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:21:44.291: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008 container env-test:
STEP: delete the pod
Feb 16 11:21:44.768: INFO: Waiting for pod pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008 to disappear
Feb 16 11:21:45.064: INFO: Pod pod-configmaps-7ccb4db7-50ae-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:21:45.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kcsdb" for this suite.
Feb 16 11:21:53.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:21:53.385: INFO: namespace: e2e-tests-secrets-kcsdb, resource: bindings, ignored listing per whitelist
Feb 16 11:21:53.405: INFO: namespace e2e-tests-secrets-kcsdb deletion completed in 8.324200466s
• [SLOW TEST:19.995 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:21:53.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 16 11:21:53.786: INFO: Waiting up to 5m0s for pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-mxlwt" to be "success or failure"
Feb 16 11:21:53.817: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.621502ms
Feb 16 11:21:55.873: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086031164s
Feb 16 11:21:57.987: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200372601s
Feb 16 11:22:00.458: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.670917279s
Feb 16 11:22:02.486: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.69890443s
Feb 16 11:22:04.626: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.83941033s
STEP: Saw pod success
Feb 16 11:22:04.626: INFO: Pod "downward-api-88c6e200-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:22:04.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-88c6e200-50ae-11ea-aa00-0242ac110008 container dapi-container:
STEP: delete the pod
Feb 16 11:22:04.864: INFO: Waiting for pod downward-api-88c6e200-50ae-11ea-aa00-0242ac110008 to disappear
Feb 16 11:22:04.905: INFO: Pod downward-api-88c6e200-50ae-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:22:04.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mxlwt" for this suite.
Feb 16 11:22:11.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:22:11.142: INFO: namespace: e2e-tests-downward-api-mxlwt, resource: bindings, ignored listing per whitelist
Feb 16 11:22:11.238: INFO: namespace e2e-tests-downward-api-mxlwt deletion completed in 6.220773679s
• [SLOW TEST:17.833 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:22:11.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 11:22:11.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-jd89x" to be "success or failure"
Feb 16 11:22:11.522: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 84.37789ms
Feb 16 11:22:13.672: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233757737s
Feb 16 11:22:15.709: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270523506s
Feb 16 11:22:17.843: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404443954s
Feb 16 11:22:19.967: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.529360598s
Feb 16 11:22:21.988: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.549801455s
STEP: Saw pod success
Feb 16 11:22:21.988: INFO: Pod "downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:22:22.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008 container client-container:
STEP: delete the pod
Feb 16 11:22:23.135: INFO: Waiting for pod downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008 to disappear
Feb 16 11:22:23.172: INFO: Pod downwardapi-volume-934dcb70-50ae-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:22:23.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jd89x" for this suite.
Feb 16 11:22:29.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:22:29.329: INFO: namespace: e2e-tests-downward-api-jd89x, resource: bindings, ignored listing per whitelist
Feb 16 11:22:29.410: INFO: namespace e2e-tests-downward-api-jd89x deletion completed in 6.21028013s
• [SLOW TEST:18.172 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:22:29.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 16 11:22:29.627: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:22:54.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-xq7m7" for this suite.
Feb 16 11:23:18.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:23:18.279: INFO: namespace: e2e-tests-init-container-xq7m7, resource: bindings, ignored listing per whitelist
Feb 16 11:23:18.358: INFO: namespace e2e-tests-init-container-xq7m7 deletion completed in 24.197960971s
• [SLOW TEST:48.947 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:23:18.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 11:23:18.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-svc67" to be "success or failure"
Feb 16 11:23:18.703: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.81875ms
Feb 16 11:23:20.721: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052975188s
Feb 16 11:23:22.738: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06925981s
Feb 16 11:23:24.832: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163744501s
Feb 16 11:23:26.869: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200878682s
Feb 16 11:23:28.920: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.251962254s
STEP: Saw pod success
Feb 16 11:23:28.921: INFO: Pod "downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:23:28.944: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008 container client-container:
STEP: delete the pod
Feb 16 11:23:29.195: INFO: Waiting for pod downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008 to disappear
Feb 16 11:23:29.209: INFO: Pod downwardapi-volume-bb60de93-50ae-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:23:29.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-svc67" for this suite.
Feb 16 11:23:35.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:23:35.486: INFO: namespace: e2e-tests-downward-api-svc67, resource: bindings, ignored listing per whitelist
Feb 16 11:23:35.584: INFO: namespace e2e-tests-downward-api-svc67 deletion completed in 6.364281969s
• [SLOW TEST:17.226 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:23:35.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 16 11:23:35.871: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:23:52.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-9jkzg" for this suite.
Feb 16 11:24:01.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:24:01.121: INFO: namespace: e2e-tests-init-container-9jkzg, resource: bindings, ignored listing per whitelist
Feb 16 11:24:01.147: INFO: namespace e2e-tests-init-container-9jkzg deletion completed in 8.20053228s
• [SLOW TEST:25.562 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:24:01.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 16 11:24:11.915: INFO: Successfully updated pod "pod-update-d4cecd75-50ae-11ea-aa00-0242ac110008"
STEP: verifying the updated pod is in kubernetes
Feb 16 11:24:11.937: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:24:11.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-49l5d" for this suite.
Feb 16 11:24:36.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:24:36.036: INFO: namespace: e2e-tests-pods-49l5d, resource: bindings, ignored listing per whitelist
Feb 16 11:24:36.262: INFO: namespace e2e-tests-pods-49l5d deletion completed in 24.315459065s
• [SLOW TEST:35.114 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:24:36.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:24:36.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ms9hb" for this suite.
Feb 16 11:24:48.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:24:48.816: INFO: namespace: e2e-tests-pods-ms9hb, resource: bindings, ignored listing per whitelist
Feb 16 11:24:48.921: INFO: namespace e2e-tests-pods-ms9hb deletion completed in 12.288077836s

• [SLOW TEST:12.659 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:24:48.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)"
&& test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rsdv4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-rsdv4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 16 11:25:05.311: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.338: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.356: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.373: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.383: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.396: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the
requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.409: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.426: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.439: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.449: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.458: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.463: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.483: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.492: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod
e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.504: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.511: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.522: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.534: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.542: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.554: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008: the server could not find the requested resource (get pods dns-test-f14f882d-50ae-11ea-aa00-0242ac110008)
Feb 16 11:25:05.554: INFO: Lookups using e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc
wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-rsdv4.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 16 11:25:10.870: INFO: DNS probes using e2e-tests-dns-rsdv4/dns-test-f14f882d-50ae-11ea-aa00-0242ac110008 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:25:10.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-rsdv4" for this suite.
Feb 16 11:25:19.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:25:19.436: INFO: namespace: e2e-tests-dns-rsdv4, resource: bindings, ignored listing per whitelist
Feb 16 11:25:19.462: INFO: namespace e2e-tests-dns-rsdv4 deletion completed in 8.269685688s

• [SLOW TEST:30.540 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:25:19.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-0382f05f-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:25:19.795: INFO: Waiting up to 5m0s for pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-9m5x7" to be "success or failure"
Feb 16 11:25:19.812: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false.
Elapsed: 16.268284ms
Feb 16 11:25:21.827: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032168563s
Feb 16 11:25:23.857: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062090617s
Feb 16 11:25:27.546: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.750494718s
Feb 16 11:25:29.563: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.76819818s
Feb 16 11:25:31.589: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.793563195s
STEP: Saw pod success
Feb 16 11:25:31.589: INFO: Pod "pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:25:31.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 16 11:25:32.559: INFO: Waiting for pod pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:25:32.606: INFO: Pod pod-configmaps-0383e75f-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:25:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9m5x7" for this suite.
Feb 16 11:25:38.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:25:38.770: INFO: namespace: e2e-tests-configmap-9m5x7, resource: bindings, ignored listing per whitelist
Feb 16 11:25:38.868: INFO: namespace e2e-tests-configmap-9m5x7 deletion completed in 6.178025429s

• [SLOW TEST:19.405 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:25:38.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0f17d4bf-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:25:39.197: INFO: Waiting up to 5m0s for pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-6m7xs" to be "success or failure"
Feb 16 11:25:39.208: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false.
Elapsed: 11.187188ms
Feb 16 11:25:41.287: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090454324s
Feb 16 11:25:43.321: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123899299s
Feb 16 11:25:45.671: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474614658s
Feb 16 11:25:47.688: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490903412s
Feb 16 11:25:49.705: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.507661776s
STEP: Saw pod success
Feb 16 11:25:49.705: INFO: Pod "pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:25:49.710: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 16 11:25:50.377: INFO: Waiting for pod pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:25:50.696: INFO: Pod pod-configmaps-0f195112-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:25:50.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6m7xs" for this suite.
Feb 16 11:25:56.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:25:56.956: INFO: namespace: e2e-tests-configmap-6m7xs, resource: bindings, ignored listing per whitelist
Feb 16 11:25:56.964: INFO: namespace e2e-tests-configmap-6m7xs deletion completed in 6.249118159s

• [SLOW TEST:18.096 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:25:56.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-n8fvr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n8fvr to expose endpoints map[]
Feb 16 11:25:57.238: INFO: Get endpoints failed (14.702655ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 16 11:25:58.277: INFO: successfully validated that service multi-endpoint-test in namespace
e2e-tests-services-n8fvr exposes endpoints map[] (1.054068399s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-n8fvr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n8fvr to expose endpoints map[pod1:[100]]
Feb 16 11:26:03.357: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.044943863s elapsed, will retry)
Feb 16 11:26:09.305: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n8fvr exposes endpoints map[pod1:[100]] (10.993671965s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-n8fvr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n8fvr to expose endpoints map[pod2:[101] pod1:[100]]
Feb 16 11:26:13.865: INFO: Unexpected endpoints: found map[1a8a4c64-50af-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.542312011s elapsed, will retry)
Feb 16 11:26:19.168: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n8fvr exposes endpoints map[pod1:[100] pod2:[101]] (9.844644911s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-n8fvr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n8fvr to expose endpoints map[pod2:[101]]
Feb 16 11:26:20.226: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n8fvr exposes endpoints map[pod2:[101]] (1.049412164s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-n8fvr
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-n8fvr to expose endpoints map[]
Feb 16 11:26:20.364: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-n8fvr exposes endpoints map[] (95.295243ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16
11:26:20.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-n8fvr" for this suite.
Feb 16 11:26:44.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:26:44.677: INFO: namespace: e2e-tests-services-n8fvr, resource: bindings, ignored listing per whitelist
Feb 16 11:26:44.727: INFO: namespace e2e-tests-services-n8fvr deletion completed in 24.169106938s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.762 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:26:44.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:26:44.827: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 16 11:26:44.888: INFO: stderr: ""
Feb 16 11:26:44.888: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 16 11:26:44.896: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:26:44.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bcqhz" for this suite.
Feb 16 11:26:50.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:26:51.085: INFO: namespace: e2e-tests-kubectl-bcqhz, resource: bindings, ignored listing per whitelist
Feb 16 11:26:51.095: INFO: namespace e2e-tests-kubectl-bcqhz deletion completed in 6.187068878s

S [SKIPPING] [6.368 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 16 11:26:44.896: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:26:51.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:26:51.347: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 12.635561ms)
Feb 16 11:26:51.354: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.544343ms)
Feb 16 11:26:51.360: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.017975ms)
Feb 16 11:26:51.366: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.829226ms)
Feb 16 11:26:51.372: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.256589ms)
Feb 16 11:26:51.377: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.753448ms)
Feb 16 11:26:51.429: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.959044ms)
Feb 16 11:26:51.438: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.376402ms)
Feb 16 11:26:51.447: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.820374ms)
Feb 16 11:26:51.455: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.804831ms)
Feb 16 11:26:51.463: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.365436ms)
Feb 16 11:26:51.472: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.598948ms)
Feb 16 11:26:51.480: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.392093ms)
Feb 16 11:26:51.489: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.435219ms)
Feb 16 11:26:51.503: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.835671ms)
Feb 16 11:26:51.524: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.666925ms)
Feb 16 11:26:51.580: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 56.555825ms)
Feb 16 11:26:51.605: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 24.828843ms)
Feb 16 11:26:51.618: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.834753ms)
Feb 16 11:26:51.625: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.213368ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:26:51.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-4jrbg" for this suite.
Feb 16 11:26:57.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:26:57.892: INFO: namespace: e2e-tests-proxy-4jrbg, resource: bindings, ignored listing per whitelist
Feb 16 11:26:57.952: INFO: namespace e2e-tests-proxy-4jrbg deletion completed in 6.321060367s

• [SLOW TEST:6.857 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:26:57.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 16 11:26:58.202: INFO: Waiting up to 5m0s for pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-zhjdh" to be "success or failure"
Feb 16 11:26:58.320: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 118.06231ms
Feb 16 11:27:00.465: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263092875s
Feb 16 11:27:02.486: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283757572s
Feb 16 11:27:04.932: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.729457369s
Feb 16 11:27:06.957: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75448305s
Feb 16 11:27:08.967: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.76489084s
Feb 16 11:27:11.030: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.827376441s
STEP: Saw pod success
Feb 16 11:27:11.030: INFO: Pod "downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:27:11.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 16 11:27:11.105: INFO: Waiting for pod downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:27:11.110: INFO: Pod downward-api-3e3c7dfd-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:27:11.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zhjdh" for this suite.
Feb 16 11:27:17.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:27:17.345: INFO: namespace: e2e-tests-downward-api-zhjdh, resource: bindings, ignored listing per whitelist
Feb 16 11:27:17.421: INFO: namespace e2e-tests-downward-api-zhjdh deletion completed in 6.297350548s

• [SLOW TEST:19.468 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:27:17.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-49cd8732-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:27:17.613: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-kz8lj" to be "success or failure"
Feb 16 11:27:17.620: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.783961ms
Feb 16 11:27:19.655: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042183663s
Feb 16 11:27:21.726: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11296477s
Feb 16 11:27:24.224: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.610969904s
Feb 16 11:27:26.253: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639461108s
Feb 16 11:27:28.268: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.655246421s
STEP: Saw pod success
Feb 16 11:27:28.268: INFO: Pod "pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:27:28.276: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 11:27:29.190: INFO: Waiting for pod pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:27:29.867: INFO: Pod pod-projected-configmaps-49cea19f-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:27:29.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kz8lj" for this suite.
Feb 16 11:27:36.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:27:36.147: INFO: namespace: e2e-tests-projected-kz8lj, resource: bindings, ignored listing per whitelist
Feb 16 11:27:36.328: INFO: namespace e2e-tests-projected-kz8lj deletion completed in 6.427341692s

• [SLOW TEST:18.908 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:27:36.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-552706b5-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:27:36.758: INFO: Waiting up to 5m0s for pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-fdqh2" to be "success or failure"
Feb 16 11:27:36.770: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.877034ms
Feb 16 11:27:38.786: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027972504s
Feb 16 11:27:40.817: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058881222s
Feb 16 11:27:42.885: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127447216s
Feb 16 11:27:44.902: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144509161s
Feb 16 11:27:46.910: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.152102768s
Feb 16 11:27:49.029: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.271342734s
STEP: Saw pod success
Feb 16 11:27:49.029: INFO: Pod "pod-configmaps-55378462-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:27:49.282: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-55378462-50af-11ea-aa00-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 16 11:27:49.412: INFO: Waiting for pod pod-configmaps-55378462-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:27:49.422: INFO: Pod pod-configmaps-55378462-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:27:49.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fdqh2" for this suite.
Feb 16 11:27:55.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:27:55.563: INFO: namespace: e2e-tests-configmap-fdqh2, resource: bindings, ignored listing per whitelist
Feb 16 11:27:55.623: INFO: namespace e2e-tests-configmap-fdqh2 deletion completed in 6.186636606s

• [SLOW TEST:19.295 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:27:55.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 11:27:55.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sgk4g'
Feb 16 11:27:57.981: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 11:27:57.981: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 16 11:27:57.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-sgk4g'
Feb 16 11:27:58.193: INFO: stderr: ""
Feb 16 11:27:58.193: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:27:58.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sgk4g" for this suite.
Feb 16 11:28:06.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:28:06.427: INFO: namespace: e2e-tests-kubectl-sgk4g, resource: bindings, ignored listing per whitelist
Feb 16 11:28:06.635: INFO: namespace e2e-tests-kubectl-sgk4g deletion completed in 8.344607036s

• [SLOW TEST:11.011 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:28:06.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 11:28:07.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-bff42" to be "success or failure"
Feb 16 11:28:07.016: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.468556ms
Feb 16 11:28:09.029: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023154654s
Feb 16 11:28:11.059: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053432675s
Feb 16 11:28:13.561: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.555544018s
Feb 16 11:28:15.579: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573470428s
Feb 16 11:28:17.598: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.592553306s
STEP: Saw pod success
Feb 16 11:28:17.598: INFO: Pod "downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:28:17.605: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 11:28:19.198: INFO: Waiting for pod downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:28:20.371: INFO: Pod downwardapi-volume-6734fe5f-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:28:20.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bff42" for this suite.
Feb 16 11:28:26.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:28:26.621: INFO: namespace: e2e-tests-downward-api-bff42, resource: bindings, ignored listing per whitelist
Feb 16 11:28:26.811: INFO: namespace e2e-tests-downward-api-bff42 deletion completed in 6.412668153s

• [SLOW TEST:20.175 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:28:26.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 16 11:28:27.040: INFO: Waiting up to 5m0s for pod "pod-73309e43-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-f29rb" to be "success or failure"
Feb 16 11:28:27.077: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 37.674003ms
Feb 16 11:28:29.088: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048237966s
Feb 16 11:28:31.107: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067517794s
Feb 16 11:28:33.205: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165067361s
Feb 16 11:28:35.224: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184039258s
Feb 16 11:28:37.635: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.595484835s
Feb 16 11:28:39.665: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.625662306s
STEP: Saw pod success
Feb 16 11:28:39.665: INFO: Pod "pod-73309e43-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:28:39.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-73309e43-50af-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:28:40.149: INFO: Waiting for pod pod-73309e43-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:28:40.356: INFO: Pod pod-73309e43-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:28:40.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f29rb" for this suite.
Feb 16 11:28:46.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:28:46.562: INFO: namespace: e2e-tests-emptydir-f29rb, resource: bindings, ignored listing per whitelist
Feb 16 11:28:46.719: INFO: namespace e2e-tests-emptydir-f29rb deletion completed in 6.34174488s

• [SLOW TEST:19.907 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:28:46.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 16 11:28:47.009: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:29:06.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-p8bdk" for this suite.
Feb 16 11:29:14.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:29:14.777: INFO: namespace: e2e-tests-init-container-p8bdk, resource: bindings, ignored listing per whitelist
Feb 16 11:29:14.793: INFO: namespace e2e-tests-init-container-p8bdk deletion completed in 8.365866588s

• [SLOW TEST:28.074 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:29:14.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:29:25.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-2dfl8" for this suite.
Feb 16 11:29:31.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:29:31.782: INFO: namespace: e2e-tests-emptydir-wrapper-2dfl8, resource: bindings, ignored listing per whitelist
Feb 16 11:29:31.805: INFO: namespace e2e-tests-emptydir-wrapper-2dfl8 deletion completed in 6.29365467s

• [SLOW TEST:17.011 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:29:31.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-99f42f1d-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 11:29:32.085: INFO: Waiting up to 5m0s for pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-wnpdl" to be "success or failure"
Feb 16 11:29:32.097: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.115589ms
Feb 16 11:29:34.277: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191790848s
Feb 16 11:29:36.290: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205230196s
Feb 16 11:29:38.654: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569503577s
Feb 16 11:29:40.670: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584668129s
Feb 16 11:29:42.686: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.601236699s
STEP: Saw pod success
Feb 16 11:29:42.686: INFO: Pod "pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:29:42.693: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 16 11:29:42.883: INFO: Waiting for pod pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:29:42.891: INFO: Pod pod-secrets-99f4c124-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:29:42.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wnpdl" for this suite.
Feb 16 11:29:48.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:29:49.006: INFO: namespace: e2e-tests-secrets-wnpdl, resource: bindings, ignored listing per whitelist
Feb 16 11:29:49.320: INFO: namespace e2e-tests-secrets-wnpdl deletion completed in 6.422402598s

• [SLOW TEST:17.514 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:29:49.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a45cdf38-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 11:29:49.548: INFO: Waiting up to 5m0s for pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-rsc55" to be "success or failure"
Feb 16 11:29:49.725: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 176.728781ms
Feb 16 11:29:51.740: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191902917s
Feb 16 11:29:53.772: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223725915s
Feb 16 11:29:56.027: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478934857s
Feb 16 11:29:58.041: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492408369s
Feb 16 11:30:00.783: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.234730055s
Feb 16 11:30:02.834: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.285822659s
STEP: Saw pod success
Feb 16 11:30:02.835: INFO: Pod "pod-secrets-a45de998-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:30:02.847: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a45de998-50af-11ea-aa00-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 16 11:30:03.357: INFO: Waiting for pod pod-secrets-a45de998-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:30:03.374: INFO: Pod pod-secrets-a45de998-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:30:03.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rsc55" for this suite.
Feb 16 11:30:09.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:30:09.462: INFO: namespace: e2e-tests-secrets-rsc55, resource: bindings, ignored listing per whitelist
Feb 16 11:30:09.658: INFO: namespace e2e-tests-secrets-rsc55 deletion completed in 6.276147097s

• [SLOW TEST:20.339 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
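For reference, the defaultMode/fsGroup secret-volume test logged above creates a pod roughly shaped like the following sketch. This is an illustration only: the image, secret name, key, and paths are assumptions, not values taken from this log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative name
spec:
  securityContext:
    runAsUser: 1000                # run as non-root, as the test name requires
    fsGroup: 1001                  # group ownership applied to the mounted volume
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # hypothetical Secret name
      defaultMode: 0440           # permission bits applied to every projected key
```

The pod runs to completion and the framework then reads its container log, which is why the log above waits for phase "Succeeded" rather than "Running".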
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:30:09.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:30:16.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-hgxbb" for this suite.
Feb 16 11:30:22.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:30:22.734: INFO: namespace: e2e-tests-namespaces-hgxbb, resource: bindings, ignored listing per whitelist
Feb 16 11:30:22.832: INFO: namespace e2e-tests-namespaces-hgxbb deletion completed in 6.205739886s
STEP: Destroying namespace "e2e-tests-nsdeletetest-k7pn5" for this suite.
Feb 16 11:30:22.836: INFO: Namespace e2e-tests-nsdeletetest-k7pn5 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-zd7h5" for this suite.
Feb 16 11:30:28.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:30:29.020: INFO: namespace: e2e-tests-nsdeletetest-zd7h5, resource: bindings, ignored listing per whitelist
Feb 16 11:30:29.054: INFO: namespace e2e-tests-nsdeletetest-zd7h5 deletion completed in 6.217619595s

• [SLOW TEST:19.395 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:30:29.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-bc024a97-50af-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 11:30:29.219: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-g9fhx" to be "success or failure"
Feb 16 11:30:29.302: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 83.404516ms
Feb 16 11:30:31.320: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100909117s
Feb 16 11:30:33.336: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117534474s
Feb 16 11:30:35.352: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132864592s
Feb 16 11:30:37.390: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171327043s
Feb 16 11:30:39.409: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190010539s
STEP: Saw pod success
Feb 16 11:30:39.409: INFO: Pod "pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:30:39.418: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 11:30:40.076: INFO: Waiting for pod pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008 to disappear
Feb 16 11:30:40.317: INFO: Pod pod-projected-secrets-bc030501-50af-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:30:40.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g9fhx" for this suite.
Feb 16 11:30:46.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:30:46.438: INFO: namespace: e2e-tests-projected-g9fhx, resource: bindings, ignored listing per whitelist
Feb 16 11:30:46.758: INFO: namespace e2e-tests-projected-g9fhx deletion completed in 6.413744339s

• [SLOW TEST:17.704 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
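The "mappings and Item Mode" projected-secret test above exercises per-item path and mode overrides. A minimal sketch of the relevant volume definition (names, keys, and paths are illustrative assumptions):

```yaml
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # hypothetical Secret name
          items:
          - key: data-1
            path: new-path-data-1           # remaps the key to a new file name
            mode: 0400                      # per-item mode overrides defaultMode
```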
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:30:46.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-kqqnp
Feb 16 11:30:57.159: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-kqqnp
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 11:30:57.165: INFO: Initial restart count of pod liveness-http is 0
Feb 16 11:31:17.921: INFO: Restart count of pod e2e-tests-container-probe-kqqnp/liveness-http is now 1 (20.756534568s elapsed)
Feb 16 11:31:38.370: INFO: Restart count of pod e2e-tests-container-probe-kqqnp/liveness-http is now 2 (41.205443137s elapsed)
Feb 16 11:31:57.471: INFO: Restart count of pod e2e-tests-container-probe-kqqnp/liveness-http is now 3 (1m0.30617786s elapsed)
Feb 16 11:32:17.844: INFO: Restart count of pod e2e-tests-container-probe-kqqnp/liveness-http is now 4 (1m20.679360015s elapsed)
Feb 16 11:33:20.811: INFO: Restart count of pod e2e-tests-container-probe-kqqnp/liveness-http is now 5 (2m23.645947983s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:33:20.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kqqnp" for this suite.
Feb 16 11:33:27.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:33:27.044: INFO: namespace: e2e-tests-container-probe-kqqnp, resource: bindings, ignored listing per whitelist
Feb 16 11:33:27.199: INFO: namespace e2e-tests-container-probe-kqqnp deletion completed in 6.315554273s

• [SLOW TEST:160.440 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
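The container-probe test above watches restartCount climb 0 → 5 as the kubelet repeatedly kills and restarts a pod whose liveness probe fails. A liveness probe of the kind this test uses looks roughly like this (image and endpoint are illustrative assumptions):

```yaml
spec:
  containers:
  - name: liveness-http
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.0   # assumed test image
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz      # endpoint the server deliberately starts failing
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1   # each failed probe triggers a container restart
```

Each restart increments restartCount, which must only ever increase — the monotonicity the test asserts.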
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:33:27.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb 16 11:33:27.437: INFO: Waiting up to 5m0s for pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008" in namespace "e2e-tests-var-expansion-r5hjl" to be "success or failure"
Feb 16 11:33:27.503: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 65.597951ms
Feb 16 11:33:29.572: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135282062s
Feb 16 11:33:31.584: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146732682s
Feb 16 11:33:34.117: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.680002109s
Feb 16 11:33:36.127: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.689538796s
Feb 16 11:33:38.965: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.528269252s
STEP: Saw pod success
Feb 16 11:33:38.966: INFO: Pod "var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:33:39.183: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 16 11:33:39.375: INFO: Waiting for pod var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008 to disappear
Feb 16 11:33:39.389: INFO: Pod var-expansion-263cf79b-50b0-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:33:39.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-r5hjl" for this suite.
Feb 16 11:33:45.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:33:45.623: INFO: namespace: e2e-tests-var-expansion-r5hjl, resource: bindings, ignored listing per whitelist
Feb 16 11:33:45.659: INFO: namespace e2e-tests-var-expansion-r5hjl deletion completed in 6.256689289s

• [SLOW TEST:18.460 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
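The Variable Expansion test above checks that one env var can be composed from earlier ones using the `$(VAR)` syntax. A hedged sketch of the pod spec (variable names and values are illustrative):

```yaml
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: ";;$(FOO);;$(BAR);;"   # $(VAR) expands to previously defined env vars
```

The framework then reads the container log and asserts the expanded value appears in the `env` output.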
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:33:45.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008
Feb 16 11:33:45.939: INFO: Pod name my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008: Found 0 pods out of 1
Feb 16 11:33:50.955: INFO: Pod name my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008: Found 1 pods out of 1
Feb 16 11:33:50.955: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008" are running
Feb 16 11:33:55.010: INFO: Pod "my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008-sd6lm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 11:33:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 11:33:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 11:33:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 11:33:45 +0000 UTC Reason: Message:}])
Feb 16 11:33:55.010: INFO: Trying to dial the pod
Feb 16 11:34:00.071: INFO: Controller my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008: Got expected result from replica 1 [my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008-sd6lm]: "my-hostname-basic-314348a3-50b0-11ea-aa00-0242ac110008-sd6lm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:34:00.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-s8vbn" for this suite.
Feb 16 11:34:08.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:34:08.341: INFO: namespace: e2e-tests-replication-controller-s8vbn, resource: bindings, ignored listing per whitelist
Feb 16 11:34:08.544: INFO: namespace e2e-tests-replication-controller-s8vbn deletion completed in 8.461487519s

• [SLOW TEST:22.885 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
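The ReplicationController test above creates an RC, waits for its single replica to run, then dials the pod and expects it to serve its own hostname back. A sketch of the manifest involved (image and port are assumptions based on the standard serve-hostname pattern, not taken from this log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic           # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376       # responds to HTTP with the pod's hostname
```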
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:34:08.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3ee74bc8-50b0-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:34:08.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-kdljh" to be "success or failure"
Feb 16 11:34:08.952: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.137822ms
Feb 16 11:34:10.976: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052918133s
Feb 16 11:34:13.767: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.844283356s
Feb 16 11:34:15.794: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.87145199s
Feb 16 11:34:17.813: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.890258231s
Feb 16 11:34:19.830: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.907649031s
Feb 16 11:34:21.862: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.939729045s
STEP: Saw pod success
Feb 16 11:34:21.863: INFO: Pod "pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:34:21.875: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 16 11:34:22.084: INFO: Waiting for pod pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008 to disappear
Feb 16 11:34:22.092: INFO: Pod pod-configmaps-3ef2f8fd-50b0-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:34:22.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kdljh" for this suite.
Feb 16 11:34:28.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:34:28.220: INFO: namespace: e2e-tests-configmap-kdljh, resource: bindings, ignored listing per whitelist
Feb 16 11:34:28.327: INFO: namespace e2e-tests-configmap-kdljh deletion completed in 6.227150005s

• [SLOW TEST:19.782 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:34:28.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 11:34:28.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-95jhn'
Feb 16 11:34:28.703: INFO: stderr: ""
Feb 16 11:34:28.703: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 16 11:34:28.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-95jhn'
Feb 16 11:34:31.919: INFO: stderr: ""
Feb 16 11:34:31.920: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:34:31.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-95jhn" for this suite.
Feb 16 11:34:38.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:34:38.144: INFO: namespace: e2e-tests-kubectl-95jhn, resource: bindings, ignored listing per whitelist
Feb 16 11:34:38.227: INFO: namespace e2e-tests-kubectl-95jhn deletion completed in 6.293436559s

• [SLOW TEST:9.900 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:34:38.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-50892e93-50b0-11ea-aa00-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-50892fc1-50b0-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-50892e93-50b0-11ea-aa00-0242ac110008
STEP: Updating configmap cm-test-opt-upd-50892fc1-50b0-11ea-aa00-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-50892fe2-50b0-11ea-aa00-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:34:57.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jpdlc" for this suite.
Feb 16 11:35:21.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:35:21.277: INFO: namespace: e2e-tests-projected-jpdlc, resource: bindings, ignored listing per whitelist
Feb 16 11:35:21.345: INFO: namespace e2e-tests-projected-jpdlc deletion completed in 24.270268291s

• [SLOW TEST:43.118 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
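The "optional updates" projected-configMap test above mounts ConfigMaps marked `optional`, then deletes one, updates another, and creates a third, waiting for the kubelet to reflect each change in the volume. The key detail is the `optional` flag; a sketch of one such volume source (names are illustrative assumptions):

```yaml
  volumes:
  - name: cm-volumes
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del     # later deleted by the test
          optional: true            # pod stays healthy even if the ConfigMap is absent
      - configMap:
          name: cm-test-opt-upd     # later updated; kubelet syncs the file contents
          optional: true
```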
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:35:21.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-vkrwl/configmap-test-6a3e7024-50b0-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:35:21.534: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-vkrwl" to be "success or failure"
Feb 16 11:35:21.560: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.837733ms
Feb 16 11:35:23.575: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040931976s
Feb 16 11:35:25.593: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058525895s
Feb 16 11:35:28.379: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.84500258s
Feb 16 11:35:30.397: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862584133s
Feb 16 11:35:32.416: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.88188233s
STEP: Saw pod success
Feb 16 11:35:32.416: INFO: Pod "pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:35:32.422: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008 container env-test: 
STEP: delete the pod
Feb 16 11:35:32.914: INFO: Waiting for pod pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008 to disappear
Feb 16 11:35:32.925: INFO: Pod pod-configmaps-6a3f7ab6-50b0-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:35:32.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vkrwl" for this suite.
Feb 16 11:35:39.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:35:39.051: INFO: namespace: e2e-tests-configmap-vkrwl, resource: bindings, ignored listing per whitelist
Feb 16 11:35:39.129: INFO: namespace e2e-tests-configmap-vkrwl deletion completed in 6.19845161s

• [SLOW TEST:17.784 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
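The ConfigMap env test above injects a ConfigMap key as a container environment variable via `configMapKeyRef`. A minimal sketch (ConfigMap name, key, and variable name are illustrative):

```yaml
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test   # hypothetical ConfigMap in the same namespace
          key: data-1            # the key whose value becomes the env var
```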
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:35:39.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 16 11:35:39.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:40.044: INFO: stderr: ""
Feb 16 11:35:40.044: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 11:35:40.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:40.240: INFO: stderr: ""
Feb 16 11:35:40.241: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb 16 11:35:45.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:45.401: INFO: stderr: ""
Feb 16 11:35:45.401: INFO: stdout: "update-demo-nautilus-jmhqt update-demo-nautilus-mz4kf "
Feb 16 11:35:45.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmhqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:45.506: INFO: stderr: ""
Feb 16 11:35:45.506: INFO: stdout: ""
Feb 16 11:35:45.506: INFO: update-demo-nautilus-jmhqt is created but not running
Feb 16 11:35:50.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:50.659: INFO: stderr: ""
Feb 16 11:35:50.660: INFO: stdout: "update-demo-nautilus-jmhqt update-demo-nautilus-mz4kf "
Feb 16 11:35:50.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmhqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:50.747: INFO: stderr: ""
Feb 16 11:35:50.747: INFO: stdout: ""
Feb 16 11:35:50.747: INFO: update-demo-nautilus-jmhqt is created but not running
Feb 16 11:35:55.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:55.936: INFO: stderr: ""
Feb 16 11:35:55.937: INFO: stdout: "update-demo-nautilus-jmhqt update-demo-nautilus-mz4kf "
Feb 16 11:35:55.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmhqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:56.069: INFO: stderr: ""
Feb 16 11:35:56.069: INFO: stdout: "true"
Feb 16 11:35:56.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmhqt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:56.178: INFO: stderr: ""
Feb 16 11:35:56.178: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:35:56.178: INFO: validating pod update-demo-nautilus-jmhqt
Feb 16 11:35:56.205: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:35:56.205: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:35:56.205: INFO: update-demo-nautilus-jmhqt is verified up and running
Feb 16 11:35:56.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz4kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:56.318: INFO: stderr: ""
Feb 16 11:35:56.318: INFO: stdout: "true"
Feb 16 11:35:56.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz4kf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:56.467: INFO: stderr: ""
Feb 16 11:35:56.467: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:35:56.467: INFO: validating pod update-demo-nautilus-mz4kf
Feb 16 11:35:56.490: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:35:56.490: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:35:56.490: INFO: update-demo-nautilus-mz4kf is verified up and running
STEP: scaling down the replication controller
Feb 16 11:35:56.494: INFO: scanned /root for discovery docs: 
Feb 16 11:35:56.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:57.811: INFO: stderr: ""
Feb 16 11:35:57.811: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 11:35:57.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:35:57.986: INFO: stderr: ""
Feb 16 11:35:57.987: INFO: stdout: "update-demo-nautilus-jmhqt update-demo-nautilus-mz4kf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 16 11:36:02.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:03.201: INFO: stderr: ""
Feb 16 11:36:03.201: INFO: stdout: "update-demo-nautilus-mz4kf "
Feb 16 11:36:03.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz4kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:03.359: INFO: stderr: ""
Feb 16 11:36:03.360: INFO: stdout: "true"
Feb 16 11:36:03.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz4kf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:03.457: INFO: stderr: ""
Feb 16 11:36:03.457: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:36:03.457: INFO: validating pod update-demo-nautilus-mz4kf
Feb 16 11:36:03.472: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:36:03.472: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:36:03.472: INFO: update-demo-nautilus-mz4kf is verified up and running
STEP: scaling up the replication controller
Feb 16 11:36:03.475: INFO: scanned /root for discovery docs: 
Feb 16 11:36:03.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:04.656: INFO: stderr: ""
Feb 16 11:36:04.657: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 11:36:04.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:05.398: INFO: stderr: ""
Feb 16 11:36:05.399: INFO: stdout: "update-demo-nautilus-mmlkg update-demo-nautilus-mz4kf "
Feb 16 11:36:05.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmlkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:05.625: INFO: stderr: ""
Feb 16 11:36:05.625: INFO: stdout: ""
Feb 16 11:36:05.625: INFO: update-demo-nautilus-mmlkg is created but not running
Feb 16 11:36:10.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:11.813: INFO: stderr: ""
Feb 16 11:36:11.813: INFO: stdout: "update-demo-nautilus-mmlkg update-demo-nautilus-mz4kf "
Feb 16 11:36:11.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmlkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:12.055: INFO: stderr: ""
Feb 16 11:36:12.055: INFO: stdout: ""
Feb 16 11:36:12.055: INFO: update-demo-nautilus-mmlkg is created but not running
Feb 16 11:36:17.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:17.259: INFO: stderr: ""
Feb 16 11:36:17.259: INFO: stdout: "update-demo-nautilus-mmlkg update-demo-nautilus-mz4kf "
Feb 16 11:36:17.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmlkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:17.440: INFO: stderr: ""
Feb 16 11:36:17.440: INFO: stdout: "true"
Feb 16 11:36:17.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmlkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:17.536: INFO: stderr: ""
Feb 16 11:36:17.536: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:36:17.536: INFO: validating pod update-demo-nautilus-mmlkg
Feb 16 11:36:17.545: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:36:17.545: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:36:17.545: INFO: update-demo-nautilus-mmlkg is verified up and running
Feb 16 11:36:17.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz4kf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:17.660: INFO: stderr: ""
Feb 16 11:36:17.660: INFO: stdout: "true"
Feb 16 11:36:17.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mz4kf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:17.788: INFO: stderr: ""
Feb 16 11:36:17.789: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:36:17.789: INFO: validating pod update-demo-nautilus-mz4kf
Feb 16 11:36:17.796: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:36:17.796: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:36:17.796: INFO: update-demo-nautilus-mz4kf is verified up and running
STEP: using delete to clean up resources
Feb 16 11:36:17.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:17.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 11:36:17.950: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 16 11:36:17.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-cj75d'
Feb 16 11:36:18.285: INFO: stderr: "No resources found.\n"
Feb 16 11:36:18.285: INFO: stdout: ""
Feb 16 11:36:18.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-cj75d -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 11:36:18.440: INFO: stderr: ""
Feb 16 11:36:18.440: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:36:18.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cj75d" for this suite.
Feb 16 11:36:42.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:36:42.614: INFO: namespace: e2e-tests-kubectl-cj75d, resource: bindings, ignored listing per whitelist
Feb 16 11:36:42.783: INFO: namespace e2e-tests-kubectl-cj75d deletion completed in 24.32855379s

• [SLOW TEST:63.654 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
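The scaling spec above follows a fixed loop: create the RC, list pods by label with a go-template, probe each pod's container state with kubectl's extended template functions (`exists` is registered by kubectl, not plain Go templates), then scale down and back up. A condensed hand-run sketch, assuming a reachable cluster; the namespace, manifest filename, and `POD_NAME` placeholder are illustrative:

```shell
# Create the replication controller (assumes update-demo-rc.yaml defines an
# RC whose pods carry the label name=update-demo, as in the test fixture).
kubectl create -f update-demo-rc.yaml --namespace=demo

# List the pods selected by the label, space-separated, as the test does.
kubectl get pods -l name=update-demo --namespace=demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# Print "true" only if the update-demo container is in the running state.
# Empty output means the pod exists but is not running yet (the test polls
# this every 5s until it sees "true").
kubectl get pods POD_NAME --namespace=demo -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Scale down to one replica, then back to two, waiting up to 5m each time.
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=demo
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=demo

# Force-delete for cleanup; as the log's warning notes, --grace-period=0
# --force does not wait for confirmation that the pods have terminated.
kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=demo
```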
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:36:42.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:36:53.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-5wg6d" for this suite.
Feb 16 11:37:37.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:37:37.418: INFO: namespace: e2e-tests-kubelet-test-5wg6d, resource: bindings, ignored listing per whitelist
Feb 16 11:37:37.462: INFO: namespace e2e-tests-kubelet-test-5wg6d deletion completed in 44.342067319s

• [SLOW TEST:54.679 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
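The read-only-root check above comes down to a pod whose container sets `securityContext.readOnlyRootFilesystem: true`, so any write to the root filesystem fails. A minimal sketch of such a manifest; the names and namespace are illustrative, not the test's actual fixture:

```shell
kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  containers:
  - name: busybox
    image: busybox
    # Attempt a write to /; with a read-only root filesystem the redirect
    # fails with "Read-only file system" and the fallback message prints.
    command: ["/bin/sh", "-c", "echo test > /file || echo 'write refused'"]
    securityContext:
      readOnlyRootFilesystem: true
  restartPolicy: Never
EOF
```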
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:37:37.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 16 11:37:37.658: INFO: Waiting up to 5m0s for pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-q5w4q" to be "success or failure"
Feb 16 11:37:37.663: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.412544ms
Feb 16 11:37:39.812: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154619112s
Feb 16 11:37:41.838: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180388342s
Feb 16 11:37:44.044: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386046463s
Feb 16 11:37:46.075: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417669422s
Feb 16 11:37:48.089: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.431175425s
STEP: Saw pod success
Feb 16 11:37:48.089: INFO: Pod "pod-bb61fdd7-50b0-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:37:48.093: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bb61fdd7-50b0-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:37:49.075: INFO: Waiting for pod pod-bb61fdd7-50b0-11ea-aa00-0242ac110008 to disappear
Feb 16 11:37:49.093: INFO: Pod pod-bb61fdd7-50b0-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:37:49.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q5w4q" for this suite.
Feb 16 11:37:55.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:37:55.414: INFO: namespace: e2e-tests-emptydir-q5w4q, resource: bindings, ignored listing per whitelist
Feb 16 11:37:55.429: INFO: namespace e2e-tests-emptydir-q5w4q deletion completed in 6.322964257s

• [SLOW TEST:17.966 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
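The `(root,0777,default)` case above mounts an emptyDir on the default medium (node disk, as opposed to `medium: Memory`) and has the test container report the directory's mode and ownership. A hand-rolled equivalent, with illustrative names and busybox standing in for the suite's mounttest image:

```shell
kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  containers:
  - name: test-container
    image: busybox
    # Print the octal permissions of the mounted emptyDir directory.
    command: ["/bin/sh", "-c", "stat -c '%a' /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}   # default medium; omitting medium means node-local disk
  restartPolicy: Never
EOF

# Once the pod reaches Succeeded, the mode is in its logs (the test reads
# logs the same way after "Saw pod success").
kubectl logs emptydir-mode-check --namespace=demo
```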
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:37:55.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb 16 11:37:56.184: INFO: Waiting up to 5m0s for pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8" in namespace "e2e-tests-svcaccounts-jfphx" to be "success or failure"
Feb 16 11:37:56.208: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.400164ms
Feb 16 11:37:58.394: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209457463s
Feb 16 11:38:00.411: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227346658s
Feb 16 11:38:02.426: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2420198s
Feb 16 11:38:04.631: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447325864s
Feb 16 11:38:06.669: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.484735672s
Feb 16 11:38:08.696: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.511647311s
Feb 16 11:38:10.747: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.562673423s
Feb 16 11:38:12.765: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.581276933s
STEP: Saw pod success
Feb 16 11:38:12.766: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8" satisfied condition "success or failure"
Feb 16 11:38:12.775: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8 container token-test: 
STEP: delete the pod
Feb 16 11:38:13.056: INFO: Waiting for pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8 to disappear
Feb 16 11:38:13.066: INFO: Pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-gq4x8 no longer exists
STEP: Creating a pod to test consume service account root CA
Feb 16 11:38:13.087: INFO: Waiting up to 5m0s for pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6" in namespace "e2e-tests-svcaccounts-jfphx" to be "success or failure"
Feb 16 11:38:13.107: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.295633ms
Feb 16 11:38:15.421: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333548221s
Feb 16 11:38:17.441: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353605847s
Feb 16 11:38:19.595: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.508056968s
Feb 16 11:38:21.619: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531486814s
Feb 16 11:38:23.776: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.688371718s
Feb 16 11:38:26.780: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.692875845s
Feb 16 11:38:28.792: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.704818028s
Feb 16 11:38:30.803: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.715301857s
STEP: Saw pod success
Feb 16 11:38:30.803: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6" satisfied condition "success or failure"
Feb 16 11:38:30.806: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6 container root-ca-test: 
STEP: delete the pod
Feb 16 11:38:31.566: INFO: Waiting for pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6 to disappear
Feb 16 11:38:31.605: INFO: Pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-ts8z6 no longer exists
STEP: Creating a pod to test consume service account namespace
Feb 16 11:38:31.684: INFO: Waiting up to 5m0s for pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw" in namespace "e2e-tests-svcaccounts-jfphx" to be "success or failure"
Feb 16 11:38:31.701: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.764052ms
Feb 16 11:38:33.727: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04238464s
Feb 16 11:38:35.748: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063416748s
Feb 16 11:38:37.863: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178902261s
Feb 16 11:38:40.021: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336429721s
Feb 16 11:38:42.041: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.357037397s
Feb 16 11:38:44.538: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.853514436s
Feb 16 11:38:46.576: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.89127512s
Feb 16 11:38:49.077: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.393058469s
STEP: Saw pod success
Feb 16 11:38:49.078: INFO: Pod "pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw" satisfied condition "success or failure"
Feb 16 11:38:49.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw container namespace-test: 
STEP: delete the pod
Feb 16 11:38:49.646: INFO: Waiting for pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw to disappear
Feb 16 11:38:49.656: INFO: Pod pod-service-account-c66a0d19-50b0-11ea-aa00-0242ac110008-9vztw no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:38:49.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jfphx" for this suite.
Feb 16 11:38:57.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:38:57.895: INFO: namespace: e2e-tests-svcaccounts-jfphx, resource: bindings, ignored listing per whitelist
Feb 16 11:38:57.976: INFO: namespace e2e-tests-svcaccounts-jfphx deletion completed in 8.305132875s

• [SLOW TEST:62.547 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
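The three sub-steps above (`token-test`, `root-ca-test`, `namespace-test` containers) each consume one of the files the service-account admission controller projects into every pod at a well-known path. They can be inspected directly in any running pod; the pod name and namespace here are illustrative:

```shell
# The mount contains exactly the three files the test consumes:
# token, ca.crt, and namespace.
kubectl exec mypod --namespace=demo -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount/

# The namespace file holds the pod's own namespace as plain text.
kubectl exec mypod --namespace=demo -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
```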
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:38:57.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-eb6e332d-50b0-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:38:58.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-7dwqb" to be "success or failure"
Feb 16 11:38:58.355: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.382843ms
Feb 16 11:39:00.376: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039796547s
Feb 16 11:39:02.395: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05847088s
Feb 16 11:39:04.537: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200554753s
Feb 16 11:39:07.214: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.877560766s
Feb 16 11:39:09.232: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.895375737s
STEP: Saw pod success
Feb 16 11:39:09.232: INFO: Pod "pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:39:09.237: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 16 11:39:09.512: INFO: Waiting for pod pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008 to disappear
Feb 16 11:39:09.531: INFO: Pod pod-configmaps-eb6f4e5e-50b0-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:39:09.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7dwqb" for this suite.
Feb 16 11:39:15.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:39:15.779: INFO: namespace: e2e-tests-configmap-7dwqb, resource: bindings, ignored listing per whitelist
Feb 16 11:39:15.828: INFO: namespace e2e-tests-configmap-7dwqb deletion completed in 6.283779747s

• [SLOW TEST:17.851 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
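A ConfigMap volume with `defaultMode`, as validated above, applies the given mode to every projected key. A sketch with illustrative names; mode 0400 is an arbitrary example, and `stat -L` is used because the projected files are symlinks into the kubelet's `..data` directory:

```shell
kubectl create configmap demo-config --from-literal=data-1=value-1 --namespace=demo

kubectl apply --namespace=demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-check
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    # Follow the symlink and print the mode the kubelet applied.
    command: ["/bin/sh", "-c", "stat -L -c '%a' /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
      defaultMode: 0400   # YAML octal; applied to each projected file
  restartPolicy: Never
EOF
```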
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:39:15.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-f600f966-50b0-11ea-aa00-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-f600fa87-50b0-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f600f966-50b0-11ea-aa00-0242ac110008
STEP: Updating configmap cm-test-opt-upd-f600fa87-50b0-11ea-aa00-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-f600faef-50b0-11ea-aa00-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:39:36.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-s9pf8" for this suite.
Feb 16 11:40:06.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:40:06.759: INFO: namespace: e2e-tests-configmap-s9pf8, resource: bindings, ignored listing per whitelist
Feb 16 11:40:06.782: INFO: namespace e2e-tests-configmap-s9pf8 deletion completed in 30.356204407s

• [SLOW TEST:50.954 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:40:06.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:40:06.989: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 16 11:40:12.010: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 16 11:40:20.041: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 16 11:40:20.083: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-d9lzw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d9lzw/deployments/test-cleanup-deployment,UID:1c2f6d2e-50b1-11ea-a994-fa163e34d433,ResourceVersion:21860727,Generation:1,CreationTimestamp:2020-02-16 11:40:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 16 11:40:20.094: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:40:20.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-d9lzw" for this suite.
Feb 16 11:40:28.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:40:28.449: INFO: namespace: e2e-tests-deployment-d9lzw, resource: bindings, ignored listing per whitelist
Feb 16 11:40:28.597: INFO: namespace e2e-tests-deployment-d9lzw deletion completed in 8.407917538s

• [SLOW TEST:21.815 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
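The cleanup behavior this test exercises hinges on RevisionHistoryLimit (set to *0 in the spec dump above), which bounds how many old ReplicaSets survive a rollout. A minimal sketch of that pruning rule, in illustrative Python with hypothetical names — not the actual deployment controller code:

```python
def old_replica_sets_to_delete(old_replica_sets, revision_history_limit):
    """Given inactive ReplicaSets ordered oldest-first, return those the
    controller would remove so at most revision_history_limit remain.
    Sketch only; the real logic lives in the Kubernetes deployment controller."""
    excess = len(old_replica_sets) - revision_history_limit
    return old_replica_sets[:max(excess, 0)]

# With RevisionHistoryLimit 0, as in test-cleanup-deployment, every old
# ReplicaSet is deleted once the new one is rolled out.
doomed = old_replica_sets_to_delete(["rs-old-1", "rs-old-2"], 0)
```

With a nonzero limit only the oldest excess sets are pruned, which is why the test waits for the history to be "cleaned up" rather than for an exact delete event.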
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:40:28.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb 16 11:40:29.001: INFO: Waiting up to 5m0s for pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008" in namespace "e2e-tests-containers-bvbz8" to be "success or failure"
Feb 16 11:40:29.024: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.659748ms
Feb 16 11:40:32.083: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.081100802s
Feb 16 11:40:34.102: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.100265235s
Feb 16 11:40:36.573: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.570923203s
Feb 16 11:40:38.619: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.617346527s
Feb 16 11:40:40.659: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.657621009s
STEP: Saw pod success
Feb 16 11:40:40.659: INFO: Pod "client-containers-21740eea-50b1-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:40:40.668: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-21740eea-50b1-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:40:40.970: INFO: Waiting for pod client-containers-21740eea-50b1-11ea-aa00-0242ac110008 to disappear
Feb 16 11:40:40.991: INFO: Pod client-containers-21740eea-50b1-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:40:40.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-bvbz8" for this suite.
Feb 16 11:40:47.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:40:47.246: INFO: namespace: e2e-tests-containers-bvbz8, resource: bindings, ignored listing per whitelist
Feb 16 11:40:47.251: INFO: namespace e2e-tests-containers-bvbz8 deletion completed in 6.24756116s

• [SLOW TEST:18.653 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:40:47.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 16 11:40:47.501: INFO: Waiting up to 5m0s for pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-sfg8g" to be "success or failure"
Feb 16 11:40:47.511: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325643ms
Feb 16 11:40:49.530: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029092542s
Feb 16 11:40:51.548: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047266271s
Feb 16 11:40:53.808: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30724515s
Feb 16 11:40:55.823: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322446618s
Feb 16 11:40:57.840: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.339641566s
STEP: Saw pod success
Feb 16 11:40:57.841: INFO: Pod "pod-2c88ed85-50b1-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:40:57.846: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2c88ed85-50b1-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:40:58.624: INFO: Waiting for pod pod-2c88ed85-50b1-11ea-aa00-0242ac110008 to disappear
Feb 16 11:40:58.643: INFO: Pod pod-2c88ed85-50b1-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:40:58.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sfg8g" for this suite.
Feb 16 11:41:04.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:41:04.857: INFO: namespace: e2e-tests-emptydir-sfg8g, resource: bindings, ignored listing per whitelist
Feb 16 11:41:04.939: INFO: namespace e2e-tests-emptydir-sfg8g deletion completed in 6.281637866s

• [SLOW TEST:17.687 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:41:04.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 16 11:41:05.274: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sw89c,SelfLink:/api/v1/namespaces/e2e-tests-watch-sw89c/configmaps/e2e-watch-test-watch-closed,UID:371f10d4-50b1-11ea-a994-fa163e34d433,ResourceVersion:21860892,Generation:0,CreationTimestamp:2020-02-16 11:41:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 11:41:05.274: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sw89c,SelfLink:/api/v1/namespaces/e2e-tests-watch-sw89c/configmaps/e2e-watch-test-watch-closed,UID:371f10d4-50b1-11ea-a994-fa163e34d433,ResourceVersion:21860893,Generation:0,CreationTimestamp:2020-02-16 11:41:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 16 11:41:05.391: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sw89c,SelfLink:/api/v1/namespaces/e2e-tests-watch-sw89c/configmaps/e2e-watch-test-watch-closed,UID:371f10d4-50b1-11ea-a994-fa163e34d433,ResourceVersion:21860894,Generation:0,CreationTimestamp:2020-02-16 11:41:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 11:41:05.391: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sw89c,SelfLink:/api/v1/namespaces/e2e-tests-watch-sw89c/configmaps/e2e-watch-test-watch-closed,UID:371f10d4-50b1-11ea-a994-fa163e34d433,ResourceVersion:21860895,Generation:0,CreationTimestamp:2020-02-16 11:41:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:41:05.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-sw89c" for this suite.
Feb 16 11:41:11.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:41:11.582: INFO: namespace: e2e-tests-watch-sw89c, resource: bindings, ignored listing per whitelist
Feb 16 11:41:11.605: INFO: namespace e2e-tests-watch-sw89c deletion completed in 6.202259362s

• [SLOW TEST:6.665 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
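The watch-restart test above closes a watch after resourceVersion 21860893, mutates the ConfigMap while no watch is open, then opens a new watch from the last observed resourceVersion and expects to receive the missed MODIFIED and DELETED events. A toy simulation of that resume rule, using the resourceVersions from the log — illustrative Python, not the Kubernetes watch API:

```python
# Event history as seen in the log: (resourceVersion, event type).
events = [
    (21860892, "ADDED"),
    (21860893, "MODIFIED"),  # first watch closed after observing this
    (21860894, "MODIFIED"),  # happened while the watch was closed
    (21860895, "DELETED"),
]

def watch_from(resource_version):
    """Replay every event newer than resource_version, as a watch resumed
    from that version would deliver them. Sketch only."""
    return [(rv, kind) for rv, kind in events if rv > resource_version]

# Resuming from 21860893 must yield exactly the changes made while closed.
resumed = watch_from(21860893)
```

This is why the test asserts on both notifications in order: a correct resume delivers every change since the supplied resourceVersion, not just the current state.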
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:41:11.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-kd6p
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 11:41:11.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kd6p" in namespace "e2e-tests-subpath-8slgj" to be "success or failure"
Feb 16 11:41:12.021: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 21.711139ms
Feb 16 11:41:14.037: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037676137s
Feb 16 11:41:16.067: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068057207s
Feb 16 11:41:18.082: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082770516s
Feb 16 11:41:20.124: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124318987s
Feb 16 11:41:22.148: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.148335484s
Feb 16 11:41:24.165: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.165528528s
Feb 16 11:41:26.178: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.179158534s
Feb 16 11:41:28.200: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 16.200745046s
Feb 16 11:41:30.225: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 18.22573332s
Feb 16 11:41:32.241: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 20.241262821s
Feb 16 11:41:34.250: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 22.251115523s
Feb 16 11:41:36.265: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 24.266068398s
Feb 16 11:41:38.285: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 26.28569583s
Feb 16 11:41:40.306: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 28.306178852s
Feb 16 11:41:42.320: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 30.320200765s
Feb 16 11:41:44.460: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Running", Reason="", readiness=false. Elapsed: 32.46054998s
Feb 16 11:41:46.870: INFO: Pod "pod-subpath-test-configmap-kd6p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.871021143s
STEP: Saw pod success
Feb 16 11:41:46.870: INFO: Pod "pod-subpath-test-configmap-kd6p" satisfied condition "success or failure"
Feb 16 11:41:46.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-kd6p container test-container-subpath-configmap-kd6p: 
STEP: delete the pod
Feb 16 11:41:47.169: INFO: Waiting for pod pod-subpath-test-configmap-kd6p to disappear
Feb 16 11:41:47.199: INFO: Pod pod-subpath-test-configmap-kd6p no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kd6p
Feb 16 11:41:47.199: INFO: Deleting pod "pod-subpath-test-configmap-kd6p" in namespace "e2e-tests-subpath-8slgj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:41:47.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-8slgj" for this suite.
Feb 16 11:41:55.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:41:55.560: INFO: namespace: e2e-tests-subpath-8slgj, resource: bindings, ignored listing per whitelist
Feb 16 11:41:55.595: INFO: namespace e2e-tests-subpath-8slgj deletion completed in 8.199774479s

• [SLOW TEST:43.990 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:41:55.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0216 11:42:05.867260       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 11:42:05.867: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:42:05.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wnpkw" for this suite.
Feb 16 11:42:11.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:42:12.088: INFO: namespace: e2e-tests-gc-wnpkw, resource: bindings, ignored listing per whitelist
Feb 16 11:42:12.095: INFO: namespace e2e-tests-gc-wnpkw deletion completed in 6.216387324s

• [SLOW TEST:16.500 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
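The garbage-collection step above relies on ownerReferences: when the RC is deleted without the orphan policy, the collector removes every pod that lists it as an owner. A rough sketch of that selection rule, with hypothetical pod records — not the controller's implementation:

```python
def pods_to_garbage_collect(pods, deleted_owner_uid):
    """Return names of pods owned by the deleted object; the GC deletes
    these when the owner was removed without orphaning. Sketch only."""
    return [
        p["name"]
        for p in pods
        if any(ref["uid"] == deleted_owner_uid
               for ref in p.get("ownerReferences", []))
    ]

# Illustrative data: two pods owned by the deleted RC, one standalone pod.
pods = [
    {"name": "rc-pod-1", "ownerReferences": [{"uid": "rc-123"}]},
    {"name": "rc-pod-2", "ownerReferences": [{"uid": "rc-123"}]},
    {"name": "standalone"},
]
doomed = pods_to_garbage_collect(pods, "rc-123")
```

The test's "wait for all pods to be garbage collected" step is then just polling until this owned set is empty.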
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:42:12.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 16 11:42:12.415: INFO: Number of nodes with available pods: 0
Feb 16 11:42:12.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:14.181: INFO: Number of nodes with available pods: 0
Feb 16 11:42:14.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:14.997: INFO: Number of nodes with available pods: 0
Feb 16 11:42:14.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:15.625: INFO: Number of nodes with available pods: 0
Feb 16 11:42:15.625: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:16.501: INFO: Number of nodes with available pods: 0
Feb 16 11:42:16.501: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:17.437: INFO: Number of nodes with available pods: 0
Feb 16 11:42:17.437: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:18.758: INFO: Number of nodes with available pods: 0
Feb 16 11:42:18.758: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:20.109: INFO: Number of nodes with available pods: 0
Feb 16 11:42:20.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:20.702: INFO: Number of nodes with available pods: 0
Feb 16 11:42:20.702: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:21.449: INFO: Number of nodes with available pods: 0
Feb 16 11:42:21.449: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 11:42:22.442: INFO: Number of nodes with available pods: 1
Feb 16 11:42:22.442: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 16 11:42:22.627: INFO: Number of nodes with available pods: 1
Feb 16 11:42:22.627: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-d64lb, will wait for the garbage collector to delete the pods
Feb 16 11:42:23.764: INFO: Deleting DaemonSet.extensions daemon-set took: 16.175106ms
Feb 16 11:42:24.665: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.890231ms
Feb 16 11:42:30.072: INFO: Number of nodes with available pods: 0
Feb 16 11:42:30.072: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 11:42:30.078: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-d64lb/daemonsets","resourceVersion":"21861111"},"items":null}

Feb 16 11:42:30.083: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-d64lb/pods","resourceVersion":"21861111"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:42:30.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-d64lb" for this suite.
Feb 16 11:42:36.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:42:36.209: INFO: namespace: e2e-tests-daemonsets-d64lb, resource: bindings, ignored listing per whitelist
Feb 16 11:42:36.312: INFO: namespace e2e-tests-daemonsets-d64lb deletion completed in 6.172410148s

• [SLOW TEST:24.216 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
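The retry behavior above — a daemon pod forced into 'Failed' and then revived — reduces to the DaemonSet controller recreating a pod on any node that should run the daemon but has no healthy pod. A toy reconcile check, with assumed phase names, purely illustrative:

```python
def nodes_needing_daemon_pod(nodes, pods_by_node):
    """Nodes that should run the daemon but whose pod is missing or Failed;
    the controller would create a replacement on each. Sketch, not the
    actual DaemonSet controller logic."""
    return [
        n for n in nodes
        if pods_by_node.get(n, {}).get("phase") not in ("Running", "Pending")
    ]

# After the test marks the pod Failed, the single node needs a new pod.
nodes = ["hunter-server-hu5at5svl7ps"]
pods = {"hunter-server-hu5at5svl7ps": {"phase": "Failed"}}
needs = nodes_needing_daemon_pod(nodes, pods)
```

Once the replacement reaches Running, the "Number of nodes with available pods" counter in the log returns to 1 and the test completes.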
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:42:36.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 16 11:42:46.640: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-6d7b904f-50b1-11ea-aa00-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-wxhws", SelfLink:"/api/v1/namespaces/e2e-tests-pods-wxhws/pods/pod-submit-remove-6d7b904f-50b1-11ea-aa00-0242ac110008", UID:"6d7d6734-50b1-11ea-a994-fa163e34d433", ResourceVersion:"21861157", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717450156, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"446965136", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rp28c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c07d80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rp28c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00070dce8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019e2900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00070dd20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00070dd40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00070dd48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00070dd4c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717450156, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717450165, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717450165, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717450156, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000d3e8e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000d3e900), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://1ba822c7f6bd6137f65d46f09b72c43adbc2c6e0d6dc5636b769af8a9fbfa785"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:43:02.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wxhws" for this suite.
Feb 16 11:43:08.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:43:08.813: INFO: namespace: e2e-tests-pods-wxhws, resource: bindings, ignored listing per whitelist
Feb 16 11:43:08.922: INFO: namespace e2e-tests-pods-wxhws deletion completed in 6.231411415s

• [SLOW TEST:32.609 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
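The pod in the struct dump above can be summarized as a small manifest. The fields below (labels `name: foo`, container `nginx`, image `docker.io/library/nginx:1.14-alpine`) are taken directly from the dump; the metadata name is shortened since the real one carries a generated UID suffix.

```yaml
# Sketch of the submitted pod, reconstructed from the v1.Pod dump above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example  # real name includes a generated suffix
  labels:
    name: foo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
```

The test then deletes this pod gracefully and uses a watch to confirm that both the creation and the deletion events are observed, as the STEP lines above record.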
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:43:08.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 16 11:43:09.614: INFO: created pod pod-service-account-defaultsa
Feb 16 11:43:09.614: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 16 11:43:09.637: INFO: created pod pod-service-account-mountsa
Feb 16 11:43:09.637: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 16 11:43:09.667: INFO: created pod pod-service-account-nomountsa
Feb 16 11:43:09.667: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 16 11:43:09.885: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 16 11:43:09.885: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 16 11:43:09.919: INFO: created pod pod-service-account-mountsa-mountspec
Feb 16 11:43:09.919: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 16 11:43:10.010: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 16 11:43:10.011: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 16 11:43:10.047: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 16 11:43:10.047: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 16 11:43:10.118: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 16 11:43:10.118: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 16 11:43:10.244: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 16 11:43:10.244: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:43:10.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-wg8wc" for this suite.
Feb 16 11:43:41.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:43:42.051: INFO: namespace: e2e-tests-svcaccounts-wg8wc, resource: bindings, ignored listing per whitelist
Feb 16 11:43:42.091: INFO: namespace e2e-tests-svcaccounts-wg8wc deletion completed in 28.948408112s

• [SLOW TEST:33.169 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
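The nine pods above cover every combination of service-account-level and pod-level `automountServiceAccountToken`. The log confirms that the pod spec wins: `pod-service-account-nomountsa-mountspec` mounts the token (`true`) even though its service account opts out. A minimal sketch of that winning combination, with illustrative service account and image names:

```yaml
# The pod-level automountServiceAccountToken field overrides the
# ServiceAccount's setting; names below are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod spec wins: the token IS mounted
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
```

When neither level sets the field, the default is to mount the token, matching the `defaultsa ... volume mount: true` line above.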
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:43:42.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 16 11:43:42.274: INFO: Waiting up to 5m0s for pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-8cctj" to be "success or failure"
Feb 16 11:43:42.288: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.362661ms
Feb 16 11:43:44.758: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483199225s
Feb 16 11:43:46.794: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.519966013s
Feb 16 11:43:49.155: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.880704395s
Feb 16 11:43:51.261: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.986756875s
Feb 16 11:43:53.280: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.005777658s
Feb 16 11:43:55.296: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.021361665s
Feb 16 11:43:57.315: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.040154836s
STEP: Saw pod success
Feb 16 11:43:57.315: INFO: Pod "pod-94b5f935-50b1-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:43:57.321: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-94b5f935-50b1-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:43:58.342: INFO: Waiting for pod pod-94b5f935-50b1-11ea-aa00-0242ac110008 to disappear
Feb 16 11:43:58.371: INFO: Pod pod-94b5f935-50b1-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:43:58.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8cctj" for this suite.
Feb 16 11:44:06.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:44:06.739: INFO: namespace: e2e-tests-emptydir-8cctj, resource: bindings, ignored listing per whitelist
Feb 16 11:44:06.899: INFO: namespace e2e-tests-emptydir-8cctj deletion completed in 8.516557968s

• [SLOW TEST:24.808 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
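"(non-root,0777,tmpfs)" means the pod runs as a non-root user, writes a file with mode 0777, and backs the emptyDir with memory. The actual test uses a dedicated mount-test image; the sketch below substitutes busybox and an arbitrary non-root UID, so treat everything except the `emptyDir.medium: Memory` mechanism as an assumption.

```yaml
# Illustrative only: the real test image, UID, and command are not in the log.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # "non-root"; actual UID unknown
  containers:
  - name: test-container             # container name taken from the log
    image: busybox                   # stand-in for the e2e mount-test image
    command: ["sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && stat -c %a /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # "tmpfs": memory-backed emptyDir
```

The pod runs to completion, which is why the log polls for "success or failure" rather than "running and ready".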
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:44:06.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5gm5
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 11:44:07.207: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5gm5" in namespace "e2e-tests-subpath-cxbqn" to be "success or failure"
Feb 16 11:44:07.289: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 82.839651ms
Feb 16 11:44:09.306: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099016022s
Feb 16 11:44:11.355: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148627537s
Feb 16 11:44:14.111: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.904549052s
Feb 16 11:44:16.136: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.929489315s
Feb 16 11:44:18.159: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.952461762s
Feb 16 11:44:20.254: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.047408998s
Feb 16 11:44:22.572: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.365419586s
Feb 16 11:44:25.139: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.932105239s
Feb 16 11:44:27.150: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 19.943043917s
Feb 16 11:44:29.169: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 21.962507964s
Feb 16 11:44:31.187: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 23.980200716s
Feb 16 11:44:33.200: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 25.993573172s
Feb 16 11:44:35.212: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 28.005763221s
Feb 16 11:44:37.227: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 30.020670622s
Feb 16 11:44:39.247: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 32.040259313s
Feb 16 11:44:41.274: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Running", Reason="", readiness=false. Elapsed: 34.067211877s
Feb 16 11:44:43.289: INFO: Pod "pod-subpath-test-downwardapi-5gm5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.082137166s
STEP: Saw pod success
Feb 16 11:44:43.289: INFO: Pod "pod-subpath-test-downwardapi-5gm5" satisfied condition "success or failure"
Feb 16 11:44:43.294: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-5gm5 container test-container-subpath-downwardapi-5gm5: 
STEP: delete the pod
Feb 16 11:44:43.455: INFO: Waiting for pod pod-subpath-test-downwardapi-5gm5 to disappear
Feb 16 11:44:43.480: INFO: Pod pod-subpath-test-downwardapi-5gm5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5gm5
Feb 16 11:44:43.480: INFO: Deleting pod "pod-subpath-test-downwardapi-5gm5" in namespace "e2e-tests-subpath-cxbqn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:44:43.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-cxbqn" for this suite.
Feb 16 11:44:49.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:44:49.869: INFO: namespace: e2e-tests-subpath-cxbqn, resource: bindings, ignored listing per whitelist
Feb 16 11:44:49.907: INFO: namespace e2e-tests-subpath-cxbqn deletion completed in 6.416758303s

• [SLOW TEST:43.008 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
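Downward API volumes are "atomic writer" volumes: the kubelet publishes their files via an atomically swapped symlink, and this test checks that a `subPath` mount into such a volume still resolves correctly. A rough sketch of the shape involved, with hypothetical paths and a busybox stand-in for the test container:

```yaml
# Illustrative reconstruction; file paths, image, and command are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /subpath_mount"]
    volumeMounts:
    - name: downward
      mountPath: /subpath_mount
      subPath: podname             # mounts a single file from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The long Running phase in the log (roughly 11:44:27 to 11:44:43) is the container repeatedly reading the subPath-mounted file while the kubelet refreshes the volume.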
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:44:49.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 16 11:44:50.099: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 16 11:44:50.198: INFO: Waiting for terminating namespaces to be deleted...
Feb 16 11:44:50.203: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 16 11:44:50.219: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 11:44:50.219: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 16 11:44:50.219: INFO: 	Container weave ready: true, restart count 0
Feb 16 11:44:50.219: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 11:44:50.219: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 16 11:44:50.219: INFO: 	Container coredns ready: true, restart count 0
Feb 16 11:44:50.219: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 11:44:50.219: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 11:44:50.219: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 11:44:50.219: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 16 11:44:50.219: INFO: 	Container coredns ready: true, restart count 0
Feb 16 11:44:50.219: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 16 11:44:50.219: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c33ff7fe-50b1-11ea-aa00-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c33ff7fe-50b1-11ea-aa00-0242ac110008 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c33ff7fe-50b1-11ea-aa00-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:45:12.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-kwfqn" for this suite.
Feb 16 11:45:34.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:45:34.932: INFO: namespace: e2e-tests-sched-pred-kwfqn, resource: bindings, ignored listing per whitelist
Feb 16 11:45:34.953: INFO: namespace e2e-tests-sched-pred-kwfqn deletion completed in 22.185976204s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:45.046 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
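The STEP lines above describe the standard NodeSelector flow: launch an unlabeled pod to discover a schedulable node, label that node with a random key set to `42`, then relaunch the pod with a matching `nodeSelector`. The label key and value below come straight from the log; the pod name and image are illustrative.

```yaml
# Relaunched pod with a nodeSelector matching the label applied to
# hunter-server-hu5at5svl7ps in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels  # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-c33ff7fe-50b1-11ea-aa00-0242ac110008: "42"
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
```

The test passes when the scheduler binds the pod to the labeled node; the label is then removed, as the final STEP lines verify.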
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:45:34.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rzvpw
Feb 16 11:45:45.209: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rzvpw
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 11:45:45.213: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:49:45.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rzvpw" for this suite.
Feb 16 11:49:51.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:49:52.018: INFO: namespace: e2e-tests-container-probe-rzvpw, resource: bindings, ignored listing per whitelist
Feb 16 11:49:52.106: INFO: namespace e2e-tests-container-probe-rzvpw deletion completed in 6.289196681s

• [SLOW TEST:257.153 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
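This test creates pod `liveness-http`, records its initial `restartCount` of 0, then watches it for roughly four minutes (11:45:45 to 11:49:45 in the log) to confirm a healthy `/healthz` probe never triggers a restart. The probe shape below is a sketch: the path comes from the test name, but the image, port, and timing values are assumptions.

```yaml
# Illustrative liveness probe; only the pod name and /healthz path are from the log.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
  namespace: e2e-tests-container-probe-rzvpw
spec:
  containers:
  - name: liveness
    image: example/healthz-server:latest  # hypothetical; real test image not logged
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080                        # assumed port
      initialDelaySeconds: 15             # assumed timings
      periodSeconds: 10
```

Because the probe keeps succeeding, the kubelet never kills the container, and `restartCount` stays at 0 for the whole observation window.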
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:49:52.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-714b7a86-50b2-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 11:49:52.366: INFO: Waiting up to 5m0s for pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-6hm8z" to be "success or failure"
Feb 16 11:49:52.457: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 90.520556ms
Feb 16 11:49:55.189: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.822667992s
Feb 16 11:49:57.211: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.844624198s
Feb 16 11:49:59.516: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.149959129s
Feb 16 11:50:01.532: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.16583682s
Feb 16 11:50:03.568: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.20172132s
STEP: Saw pod success
Feb 16 11:50:03.568: INFO: Pod "pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:50:03.595: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 16 11:50:03.786: INFO: Waiting for pod pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008 to disappear
Feb 16 11:50:03.879: INFO: Pod pod-configmaps-714d4dc1-50b2-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:50:03.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6hm8z" for this suite.
Feb 16 11:50:09.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:50:10.031: INFO: namespace: e2e-tests-configmap-6hm8z, resource: bindings, ignored listing per whitelist
Feb 16 11:50:10.048: INFO: namespace e2e-tests-configmap-6hm8z deletion completed in 6.15197759s

• [SLOW TEST:17.941 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
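The ConfigMap name and the container name `configmap-volume-test` appear in the log; the data keys, image, and mount path below are illustrative. The test mounts the ConfigMap as a volume and has the container print a key's value to its log, which the framework then fetches (the "Trying to get logs" line above).

```yaml
# Sketch of the ConfigMap-as-volume test; data and paths are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-714b7a86-50b2-11ea-aa00-0242ac110008  # from the log
data:
  data-1: value-1            # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test            # container name from the log
    image: busybox                         # stand-in for the e2e mount-test image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-714b7a86-50b2-11ea-aa00-0242ac110008
```

Each key in the ConfigMap's `data` becomes a file under the mount path, so the container can read configuration without any API access.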
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:50:10.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 16 11:50:10.245: INFO: Waiting up to 5m0s for pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-wlvmg" to be "success or failure"
Feb 16 11:50:10.398: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 152.704741ms
Feb 16 11:50:12.879: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634261604s
Feb 16 11:50:14.890: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.645274496s
Feb 16 11:50:16.899: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653808434s
Feb 16 11:50:19.817: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.572084619s
Feb 16 11:50:21.840: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.595630263s
Feb 16 11:50:23.858: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.613568267s
STEP: Saw pod success
Feb 16 11:50:23.859: INFO: Pod "pod-7bf44fc1-50b2-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:50:23.869: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7bf44fc1-50b2-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:50:24.349: INFO: Waiting for pod pod-7bf44fc1-50b2-11ea-aa00-0242ac110008 to disappear
Feb 16 11:50:24.617: INFO: Pod pod-7bf44fc1-50b2-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:50:24.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wlvmg" for this suite.
Feb 16 11:50:30.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:50:30.751: INFO: namespace: e2e-tests-emptydir-wlvmg, resource: bindings, ignored listing per whitelist
Feb 16 11:50:30.825: INFO: namespace e2e-tests-emptydir-wlvmg deletion completed in 6.195818033s

• [SLOW TEST:20.777 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:50:30.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 16 11:50:31.137: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vxcsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxcsl/configmaps/e2e-watch-test-resource-version,UID:8858a479-50b2-11ea-a994-fa163e34d433,ResourceVersion:21861993,Generation:0,CreationTimestamp:2020-02-16 11:50:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 11:50:31.137: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vxcsl,SelfLink:/api/v1/namespaces/e2e-tests-watch-vxcsl/configmaps/e2e-watch-test-resource-version,UID:8858a479-50b2-11ea-a994-fa163e34d433,ResourceVersion:21861994,Generation:0,CreationTimestamp:2020-02-16 11:50:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:50:31.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vxcsl" for this suite.
Feb 16 11:50:37.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:50:37.314: INFO: namespace: e2e-tests-watch-vxcsl, resource: bindings, ignored listing per whitelist
Feb 16 11:50:37.413: INFO: namespace e2e-tests-watch-vxcsl deletion completed in 6.271195634s

• [SLOW TEST:6.587 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:50:37.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0216 11:51:18.396748       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 11:51:18.397: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:51:18.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-49v9d" for this suite.
Feb 16 11:51:42.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:51:42.618: INFO: namespace: e2e-tests-gc-49v9d, resource: bindings, ignored listing per whitelist
Feb 16 11:51:42.678: INFO: namespace e2e-tests-gc-49v9d deletion completed in 24.27339282s

• [SLOW TEST:65.265 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:51:42.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 16 11:51:42.886: INFO: Waiting up to 5m0s for pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-q4vmt" to be "success or failure"
Feb 16 11:51:42.908: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.341331ms
Feb 16 11:51:45.155: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269422054s
Feb 16 11:51:47.168: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282270245s
Feb 16 11:51:49.216: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330713563s
Feb 16 11:51:51.228: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342788199s
Feb 16 11:51:53.250: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.363948903s
STEP: Saw pod success
Feb 16 11:51:53.250: INFO: Pod "pod-b32ca36e-50b2-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:51:53.258: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b32ca36e-50b2-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:51:53.488: INFO: Waiting for pod pod-b32ca36e-50b2-11ea-aa00-0242ac110008 to disappear
Feb 16 11:51:53.553: INFO: Pod pod-b32ca36e-50b2-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:51:53.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q4vmt" for this suite.
Feb 16 11:51:59.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:51:59.823: INFO: namespace: e2e-tests-emptydir-q4vmt, resource: bindings, ignored listing per whitelist
Feb 16 11:51:59.947: INFO: namespace e2e-tests-emptydir-q4vmt deletion completed in 6.333003686s

• [SLOW TEST:17.269 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:51:59.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 16 11:52:00.159: INFO: Waiting up to 5m0s for pod "pod-bd788382-50b2-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-phkfw" to be "success or failure"
Feb 16 11:52:00.168: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.85023ms
Feb 16 11:52:02.208: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04845105s
Feb 16 11:52:04.228: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068875432s
Feb 16 11:52:06.240: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080646332s
Feb 16 11:52:08.264: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105365801s
Feb 16 11:52:10.630: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.470469528s
Feb 16 11:52:12.640: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.481038845s
STEP: Saw pod success
Feb 16 11:52:12.640: INFO: Pod "pod-bd788382-50b2-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 11:52:12.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bd788382-50b2-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 11:52:13.148: INFO: Waiting for pod pod-bd788382-50b2-11ea-aa00-0242ac110008 to disappear
Feb 16 11:52:13.180: INFO: Pod pod-bd788382-50b2-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:52:13.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-phkfw" for this suite.
Feb 16 11:52:19.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:52:19.318: INFO: namespace: e2e-tests-emptydir-phkfw, resource: bindings, ignored listing per whitelist
Feb 16 11:52:19.410: INFO: namespace e2e-tests-emptydir-phkfw deletion completed in 6.190644205s

• [SLOW TEST:19.463 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:52:19.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:52:19.592: INFO: Creating deployment "nginx-deployment"
Feb 16 11:52:19.598: INFO: Waiting for observed generation 1
Feb 16 11:52:21.821: INFO: Waiting for all required pods to come up
Feb 16 11:52:22.059: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 16 11:53:06.253: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 16 11:53:06.267: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 16 11:53:06.298: INFO: Updating deployment nginx-deployment
Feb 16 11:53:06.298: INFO: Waiting for observed generation 2
Feb 16 11:53:11.549: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 16 11:53:11.582: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 16 11:53:11.612: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 16 11:53:12.253: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 16 11:53:12.253: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 16 11:53:12.288: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 16 11:53:12.312: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 16 11:53:12.313: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 16 11:53:12.330: INFO: Updating deployment nginx-deployment
Feb 16 11:53:12.330: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 16 11:53:13.172: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 16 11:53:17.080: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 16 11:53:17.888: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnxlw/deployments/nginx-deployment,UID:c910d3a8-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862582,Generation:3,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-16 11:53:07 +0000 UTC 2020-02-16 11:52:19 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-16 11:53:15 +0000 UTC 2020-02-16 11:53:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 16 11:53:18.141: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnxlw/replicasets/nginx-deployment-5c98f8fb5,UID:e4e84cb7-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862576,Generation:3,CreationTimestamp:2020-02-16 11:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c910d3a8-50b2-11ea-a994-fa163e34d433 0xc002151137 0xc002151138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 11:53:18.141: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 16 11:53:18.141: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cnxlw/replicasets/nginx-deployment-85ddf47c5d,UID:c92531c3-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862573,Generation:3,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c910d3a8-50b2-11ea-a994-fa163e34d433 0xc0021511f7 0xc0021511f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 16 11:53:19.210: INFO: Pod "nginx-deployment-5c98f8fb5-68b45" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-68b45,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-68b45,UID:e508b76c-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862562,Generation:0,CreationTimestamp:2020-02-16 11:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0015e2d77 0xc0015e2d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015e2de0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0015e2e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-16 11:53:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.210: INFO: Pod "nginx-deployment-5c98f8fb5-767gs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-767gs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-767gs,UID:ebfe0ae5-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862607,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0015e2ed7 0xc0015e2ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015e2f50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0015e2f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.210: INFO: Pod "nginx-deployment-5c98f8fb5-cgttx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cgttx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-cgttx,UID:eb5587dd-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862586,Generation:0,CreationTimestamp:2020-02-16 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0015e2fd0 0xc0015e2fd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015e30b0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0015e30d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.210: INFO: Pod "nginx-deployment-5c98f8fb5-gqn6q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gqn6q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-gqn6q,UID:e508b1b5-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862564,Generation:0,CreationTimestamp:2020-02-16 11:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0015e3157 0xc0015e3158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015e31c0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0024068c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-16 11:53:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.210: INFO: Pod "nginx-deployment-5c98f8fb5-hwgdc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hwgdc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-hwgdc,UID:ec02821a-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862602,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc002406987 0xc002406988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002406ea0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002406ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.211: INFO: Pod "nginx-deployment-5c98f8fb5-k5764" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k5764,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-k5764,UID:ebfdc951-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862599,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc002406f30 0xc002406f31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002406fa0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002406fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.211: INFO: Pod "nginx-deployment-5c98f8fb5-kl44f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kl44f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-kl44f,UID:e4fca45c-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862553,Generation:0,CreationTimestamp:2020-02-16 11:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0024070a0 0xc0024070a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002407130} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002407150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-16 11:53:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.211: INFO: Pod "nginx-deployment-5c98f8fb5-mbhmg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mbhmg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-mbhmg,UID:ebd49fe9-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862608,Generation:0,CreationTimestamp:2020-02-16 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc002407267 0xc002407268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024072d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0024072f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.211: INFO: Pod "nginx-deployment-5c98f8fb5-pbbrb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pbbrb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-pbbrb,UID:e5875b27-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862569,Generation:0,CreationTimestamp:2020-02-16 11:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0024073b7 0xc0024073b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002407440} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002407530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-16 11:53:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.212: INFO: Pod "nginx-deployment-5c98f8fb5-smr2n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-smr2n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-smr2n,UID:ebfe309a-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862600,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc002407717 0xc002407718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024077e0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002407810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.212: INFO: Pod "nginx-deployment-5c98f8fb5-svr2c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-svr2c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-svr2c,UID:e57503ea-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862566,Generation:0,CreationTimestamp:2020-02-16 11:53:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc0024078a0 0xc0024078a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024079d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0024079f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-16 11:53:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.212: INFO: Pod "nginx-deployment-5c98f8fb5-xd5tq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xd5tq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-5c98f8fb5-xd5tq,UID:ebd45f24-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862605,Generation:0,CreationTimestamp:2020-02-16 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e4e84cb7-50b2-11ea-a994-fa163e34d433 0xc002407c27 0xc002407c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002450070} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0024500a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.213: INFO: Pod "nginx-deployment-85ddf47c5d-2p86l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2p86l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-2p86l,UID:ebfb2a2c-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862598,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002450117 0xc002450118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002450440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.213: INFO: Pod "nginx-deployment-85ddf47c5d-4plkh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4plkh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-4plkh,UID:c9487879-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862492,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc0024504d0 0xc0024504d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002450540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-16 11:52:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:58 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://46775d3517f3c264a5d03ad8ddc5adc1cd92c8acbe64c8186b5e81560eda8240}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.213: INFO: Pod "nginx-deployment-85ddf47c5d-4vk9j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4vk9j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-4vk9j,UID:ebfe102d-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862603,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002450697 0xc002450698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc002450720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.214: INFO: Pod "nginx-deployment-85ddf47c5d-6788g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6788g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-6788g,UID:c943e600-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862495,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc0024507a0 0xc0024507a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002450940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-16 11:52:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c4831a020a4277ab2b7a50ac40eb9a9ddf6e01517ad251252c729377a3abd663}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.214: INFO: Pod "nginx-deployment-85ddf47c5d-6p5rt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6p5rt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-6p5rt,UID:ebd3edbf-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862604,Generation:0,CreationTimestamp:2020-02-16 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002450ac7 0xc002450ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002450b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.214: INFO: Pod "nginx-deployment-85ddf47c5d-9g68s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9g68s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-9g68s,UID:ebff6433-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862606,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002450bd7 0xc002450bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc002450c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.214: INFO: Pod "nginx-deployment-85ddf47c5d-cphml" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cphml,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-cphml,UID:c932b407-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862487,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002450cc0 0xc002450cc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002450d20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-16 11:52:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:58 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://816abb3fd0beb3bcb25b6811d24fca847388181d59deeb54ddf79e0c3c7435c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.214: INFO: Pod "nginx-deployment-85ddf47c5d-dd84z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dd84z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-dd84z,UID:eb55646d-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862587,Generation:0,CreationTimestamp:2020-02-16 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002450e17 0xc002450e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002450e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002450ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.215: INFO: Pod "nginx-deployment-85ddf47c5d-gqc96" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gqc96,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-gqc96,UID:ebd41e18-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862601,Generation:0,CreationTimestamp:2020-02-16 11:53:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002451057 0xc002451058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0024511d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024511f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.215: INFO: Pod "nginx-deployment-85ddf47c5d-k5lp6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k5lp6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-k5lp6,UID:ebfd6d19-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862597,Generation:0,CreationTimestamp:2020-02-16 11:53:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002451267 0xc002451268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 
0xc0024512d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002451370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.215: INFO: Pod "nginx-deployment-85ddf47c5d-lcx9s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lcx9s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-lcx9s,UID:c92df7f7-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862480,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002451400 0xc002451401}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002451460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002451480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-16 11:52:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://99e814bdf581ae040e6700b8e0aa90d0e73e39d2858434e6de21caed71765b65}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.216: INFO: Pod "nginx-deployment-85ddf47c5d-qhxq9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qhxq9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-qhxq9,UID:c94374c4-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862503,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002451857 0xc002451858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0024518c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024518e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-16 11:52:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2b2abf97e537a56c99e0269e69bb84863f4f89eb85e39ba283c916d5d206d64d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.216: INFO: Pod "nginx-deployment-85ddf47c5d-ts7xx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ts7xx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-ts7xx,UID:c931d8ee-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862475,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc002451d77 0xc002451d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002451e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002451e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-16 11:52:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6df2dcecced9f13c8d65df01203eea3bb443ec93f5e1fa79200360c677f667f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.216: INFO: Pod "nginx-deployment-85ddf47c5d-vdnjx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vdnjx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-vdnjx,UID:c94474b1-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862486,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc0019521e7 0xc0019521e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001952380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019523a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-16 11:52:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:58 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8cabbdcce769e2276a515af85b0673d114c6ca4d9f521faec6c3765d92a3f9a4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 16 11:53:19.216: INFO: Pod "nginx-deployment-85ddf47c5d-wghs5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wghs5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-cnxlw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cnxlw/pods/nginx-deployment-85ddf47c5d-wghs5,UID:c9485a38-50b2-11ea-a994-fa163e34d433,ResourceVersion:21862499,Generation:0,CreationTimestamp:2020-02-16 11:52:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c92531c3-50b2-11ea-a994-fa163e34d433 0xc001952477 0xc001952478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-m862z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m862z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-m862z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0019524e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001952500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:53:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 11:52:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-16 11:52:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-16 11:52:58 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ca84cc7a6782c860a29813bf468e2edc9b591a47eb68e489a5660b5b3cf64a6b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:53:19.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-cnxlw" for this suite.
Feb 16 11:54:12.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:54:12.633: INFO: namespace: e2e-tests-deployment-cnxlw, resource: bindings, ignored listing per whitelist
Feb 16 11:54:14.851: INFO: namespace e2e-tests-deployment-cnxlw deletion completed in 54.962580971s

• [SLOW TEST:115.441 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
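The proportional-scaling test above creates and scales an `nginx-deployment` whose pods (dumped in full above) run `docker.io/library/nginx:1.14-alpine` with the labels `name: nginx` and a `pod-template-hash`. A minimal manifest producing pods like those in the dumps might look like this (replica count and port are illustrative; the hash label is added by the ReplicaSet controller, not by the author):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                     # illustrative; the test scales this up and down
  selector:
    matchLabels:
      name: nginx                 # matches the name=nginx label seen in the pod dumps
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # image from the ContainerStatuses above
        ports:
        - containerPort: 80       # assumed; not stated in the log
```

Proportional scaling means that when the Deployment is resized mid-rollout, the new replica count is split across the old and new ReplicaSets in proportion to their existing sizes.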
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:54:14.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-m8vbn
Feb 16 11:54:33.314: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-m8vbn
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 11:54:33.323: INFO: Initial restart count of pod liveness-exec is 0
Feb 16 11:55:24.701: INFO: Restart count of pod e2e-tests-container-probe-m8vbn/liveness-exec is now 1 (51.377488492s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:55:24.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m8vbn" for this suite.
Feb 16 11:55:30.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:55:31.137: INFO: namespace: e2e-tests-container-probe-m8vbn, resource: bindings, ignored listing per whitelist
Feb 16 11:55:31.141: INFO: namespace e2e-tests-container-probe-m8vbn deletion completed in 6.26706029s

• [SLOW TEST:76.288 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
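The `liveness-exec` pod in the test above went from restart count 0 to 1 after roughly 51 seconds, because its liveness probe (`cat /tmp/health`) started failing once the probed file disappeared. A minimal sketch of such a pod, modeled on the standard Kubernetes exec-liveness example (the image, timings, and delays are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    # Healthy for 30s, then the probed file is removed and the probe fails.
    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 5
      periodSeconds: 5
```

Once `/tmp/health` is removed, `cat` exits non-zero, the probe fails repeatedly, and the kubelet restarts the container — which is exactly the restart-count transition the test asserts on.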
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:55:31.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4bzfx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 11:55:31.342: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 11:56:07.702: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-4bzfx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 11:56:07.702: INFO: >>> kubeConfig: /root/.kube/config
I0216 11:56:07.829728       9 log.go:172] (0xc000acaf20) (0xc0022255e0) Create stream
I0216 11:56:07.830075       9 log.go:172] (0xc000acaf20) (0xc0022255e0) Stream added, broadcasting: 1
I0216 11:56:07.842991       9 log.go:172] (0xc000acaf20) Reply frame received for 1
I0216 11:56:07.843178       9 log.go:172] (0xc000acaf20) (0xc0024c3720) Create stream
I0216 11:56:07.843202       9 log.go:172] (0xc000acaf20) (0xc0024c3720) Stream added, broadcasting: 3
I0216 11:56:07.845027       9 log.go:172] (0xc000acaf20) Reply frame received for 3
I0216 11:56:07.845057       9 log.go:172] (0xc000acaf20) (0xc0024c37c0) Create stream
I0216 11:56:07.845074       9 log.go:172] (0xc000acaf20) (0xc0024c37c0) Stream added, broadcasting: 5
I0216 11:56:07.847520       9 log.go:172] (0xc000acaf20) Reply frame received for 5
I0216 11:56:08.064664       9 log.go:172] (0xc000acaf20) Data frame received for 3
I0216 11:56:08.064955       9 log.go:172] (0xc0024c3720) (3) Data frame handling
I0216 11:56:08.065025       9 log.go:172] (0xc0024c3720) (3) Data frame sent
I0216 11:56:08.228275       9 log.go:172] (0xc000acaf20) Data frame received for 1
I0216 11:56:08.228457       9 log.go:172] (0xc0022255e0) (1) Data frame handling
I0216 11:56:08.228520       9 log.go:172] (0xc0022255e0) (1) Data frame sent
I0216 11:56:08.228955       9 log.go:172] (0xc000acaf20) (0xc0022255e0) Stream removed, broadcasting: 1
I0216 11:56:08.229744       9 log.go:172] (0xc000acaf20) (0xc0024c3720) Stream removed, broadcasting: 3
I0216 11:56:08.229823       9 log.go:172] (0xc000acaf20) (0xc0024c37c0) Stream removed, broadcasting: 5
I0216 11:56:08.229916       9 log.go:172] (0xc000acaf20) (0xc0022255e0) Stream removed, broadcasting: 1
I0216 11:56:08.229938       9 log.go:172] (0xc000acaf20) (0xc0024c3720) Stream removed, broadcasting: 3
I0216 11:56:08.229958       9 log.go:172] (0xc000acaf20) (0xc0024c37c0) Stream removed, broadcasting: 5
I0216 11:56:08.230052       9 log.go:172] (0xc000acaf20) Go away received
Feb 16 11:56:08.230: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:56:08.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-4bzfx" for this suite.
Feb 16 11:56:32.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:56:32.696: INFO: namespace: e2e-tests-pod-network-test-4bzfx, resource: bindings, ignored listing per whitelist
Feb 16 11:56:32.906: INFO: namespace e2e-tests-pod-network-test-4bzfx deletion completed in 24.632670474s

• [SLOW TEST:61.765 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
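The intra-pod HTTP check above works by exec-ing `curl` against a test pod's `/dial` endpoint, which in turn dials the target pod (`host=10.32.0.4&port=8080`) and reports whether it answered. A rough sketch of the kind of server pod the framework deploys for this (the image tag, names, and ports here are assumptions based on typical e2e setups, not read from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
  labels:
    selector-key: netserver        # matched by the selector the test creates
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed test image
    ports:
    - containerPort: 8080          # HTTP endpoint reached via /dial?... above
    - containerPort: 8081          # UDP endpoint used by the udp variant of this test
      protocol: UDP
```

The test passes when the dialing pod reports the expected hostname from every target endpoint, confirming pod-to-pod connectivity over the cluster network (weave-net here).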
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:56:32.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 16 11:56:45.810: INFO: Successfully updated pod "annotationupdate6034535f-50b3-11ea-aa00-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:56:47.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-smh7j" for this suite.
Feb 16 11:57:12.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:57:12.159: INFO: namespace: e2e-tests-downward-api-smh7j, resource: bindings, ignored listing per whitelist
Feb 16 11:57:12.283: INFO: namespace e2e-tests-downward-api-smh7j deletion completed in 24.314623661s

• [SLOW TEST:39.377 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
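The Downward API test above creates a pod whose annotations are projected into a file through a `downwardAPI` volume, then updates an annotation ("Successfully updated pod") and waits for the kubelet to rewrite the projected file. A minimal sketch of such a pod (names, paths, and the annotation value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate
  annotations:
    build: one                     # updating this later changes the projected file
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```

After `kubectl annotate pod annotationupdate build=two --overwrite`, the kubelet eventually refreshes `/etc/podinfo/annotations`, which is the modification the test verifies.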
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:57:12.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 11:57:12.520: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.321922ms)
Feb 16 11:57:12.565: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 44.811968ms)
Feb 16 11:57:12.578: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.016514ms)
Feb 16 11:57:12.592: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.24389ms)
Feb 16 11:57:12.599: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.882997ms)
Feb 16 11:57:12.608: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.247657ms)
Feb 16 11:57:12.613: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.87823ms)
Feb 16 11:57:12.618: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.542385ms)
Feb 16 11:57:12.622: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.881629ms)
Feb 16 11:57:12.626: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.852149ms)
Feb 16 11:57:12.630: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.312358ms)
Feb 16 11:57:12.635: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.116252ms)
Feb 16 11:57:12.639: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.681967ms)
Feb 16 11:57:12.644: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.177373ms)
Feb 16 11:57:12.649: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.717697ms)
Feb 16 11:57:12.653: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.522816ms)
Feb 16 11:57:12.658: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.714822ms)
Feb 16 11:57:12.662: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.215896ms)
Feb 16 11:57:12.668: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.306958ms)
Feb 16 11:57:12.671: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.882799ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:57:12.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-d4fqd" for this suite.
Feb 16 11:57:18.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:57:18.769: INFO: namespace: e2e-tests-proxy-d4fqd, resource: bindings, ignored listing per whitelist
Feb 16 11:57:18.912: INFO: namespace e2e-tests-proxy-d4fqd deletion completed in 6.236034781s

• [SLOW TEST:6.629 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:57:18.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 16 11:57:19.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:21.493: INFO: stderr: ""
Feb 16 11:57:21.493: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 11:57:21.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:21.901: INFO: stderr: ""
Feb 16 11:57:21.901: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb 16 11:57:26.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:27.070: INFO: stderr: ""
Feb 16 11:57:27.070: INFO: stdout: "update-demo-nautilus-fjvs9 update-demo-nautilus-qtd5h "
Feb 16 11:57:27.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fjvs9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:27.189: INFO: stderr: ""
Feb 16 11:57:27.189: INFO: stdout: ""
Feb 16 11:57:27.189: INFO: update-demo-nautilus-fjvs9 is created but not running
Feb 16 11:57:32.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:32.326: INFO: stderr: ""
Feb 16 11:57:32.326: INFO: stdout: "update-demo-nautilus-fjvs9 update-demo-nautilus-qtd5h "
Feb 16 11:57:32.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fjvs9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:32.491: INFO: stderr: ""
Feb 16 11:57:32.492: INFO: stdout: ""
Feb 16 11:57:32.492: INFO: update-demo-nautilus-fjvs9 is created but not running
Feb 16 11:57:37.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:37.699: INFO: stderr: ""
Feb 16 11:57:37.700: INFO: stdout: "update-demo-nautilus-fjvs9 update-demo-nautilus-qtd5h "
Feb 16 11:57:37.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fjvs9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:37.881: INFO: stderr: ""
Feb 16 11:57:37.881: INFO: stdout: "true"
Feb 16 11:57:37.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fjvs9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:38.030: INFO: stderr: ""
Feb 16 11:57:38.031: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:57:38.031: INFO: validating pod update-demo-nautilus-fjvs9
Feb 16 11:57:38.042: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:57:38.042: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:57:38.042: INFO: update-demo-nautilus-fjvs9 is verified up and running
Feb 16 11:57:38.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtd5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:38.136: INFO: stderr: ""
Feb 16 11:57:38.136: INFO: stdout: "true"
Feb 16 11:57:38.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtd5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:38.266: INFO: stderr: ""
Feb 16 11:57:38.266: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 11:57:38.266: INFO: validating pod update-demo-nautilus-qtd5h
Feb 16 11:57:38.287: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 11:57:38.287: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 11:57:38.287: INFO: update-demo-nautilus-qtd5h is verified up and running
STEP: using delete to clean up resources
Feb 16 11:57:38.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:38.448: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 16 11:57:38.448: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 16 11:57:38.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-qhqcv'
Feb 16 11:57:38.706: INFO: stderr: "No resources found.\n"
Feb 16 11:57:38.706: INFO: stdout: ""
Feb 16 11:57:38.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-qhqcv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 16 11:57:38.840: INFO: stderr: ""
Feb 16 11:57:38.840: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:57:38.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qhqcv" for this suite.
Feb 16 11:58:03.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:58:03.346: INFO: namespace: e2e-tests-kubectl-qhqcv, resource: bindings, ignored listing per whitelist
Feb 16 11:58:03.389: INFO: namespace e2e-tests-kubectl-qhqcv deletion completed in 24.536562327s

• [SLOW TEST:44.477 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
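The Update Demo test above creates the `update-demo-nautilus` replication controller via `kubectl create -f -`, waits for two pods matching `name=update-demo`, verifies each runs `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`, and then force-deletes it. A sketch of the manifest it pipes in (replicas, selector, container name, and image are taken from the log output; the container port is an assumption):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                      # the log waits for expected=2 pods
  selector:
    name: update-demo              # label used by the kubectl get ... -l name=update-demo calls
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo          # matches the eq .name "update-demo" go-template check
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80        # assumed; not stated in the log
```

The nautilus image serves a small page whose JSON payload (`"image": "nautilus.jpg"`) is what the "validating pod" steps fetch and unmarshal to confirm each replica is serving correctly.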
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:58:03.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tb4pd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 11:58:03.569: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 11:58:39.891: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-tb4pd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 11:58:39.891: INFO: >>> kubeConfig: /root/.kube/config
I0216 11:58:40.085255       9 log.go:172] (0xc000acaf20) (0xc002158820) Create stream
I0216 11:58:40.085620       9 log.go:172] (0xc000acaf20) (0xc002158820) Stream added, broadcasting: 1
I0216 11:58:40.095470       9 log.go:172] (0xc000acaf20) Reply frame received for 1
I0216 11:58:40.095534       9 log.go:172] (0xc000acaf20) (0xc001ffc000) Create stream
I0216 11:58:40.095545       9 log.go:172] (0xc000acaf20) (0xc001ffc000) Stream added, broadcasting: 3
I0216 11:58:40.096907       9 log.go:172] (0xc000acaf20) Reply frame received for 3
I0216 11:58:40.097114       9 log.go:172] (0xc000acaf20) (0xc0021588c0) Create stream
I0216 11:58:40.097146       9 log.go:172] (0xc000acaf20) (0xc0021588c0) Stream added, broadcasting: 5
I0216 11:58:40.098385       9 log.go:172] (0xc000acaf20) Reply frame received for 5
I0216 11:58:41.267222       9 log.go:172] (0xc000acaf20) Data frame received for 3
I0216 11:58:41.267318       9 log.go:172] (0xc001ffc000) (3) Data frame handling
I0216 11:58:41.267343       9 log.go:172] (0xc001ffc000) (3) Data frame sent
I0216 11:58:41.466772       9 log.go:172] (0xc000acaf20) Data frame received for 1
I0216 11:58:41.466894       9 log.go:172] (0xc000acaf20) (0xc0021588c0) Stream removed, broadcasting: 5
I0216 11:58:41.466987       9 log.go:172] (0xc002158820) (1) Data frame handling
I0216 11:58:41.467031       9 log.go:172] (0xc000acaf20) (0xc001ffc000) Stream removed, broadcasting: 3
I0216 11:58:41.467106       9 log.go:172] (0xc002158820) (1) Data frame sent
I0216 11:58:41.467149       9 log.go:172] (0xc000acaf20) (0xc002158820) Stream removed, broadcasting: 1
I0216 11:58:41.467190       9 log.go:172] (0xc000acaf20) Go away received
I0216 11:58:41.467617       9 log.go:172] (0xc000acaf20) (0xc002158820) Stream removed, broadcasting: 1
I0216 11:58:41.467650       9 log.go:172] (0xc000acaf20) (0xc001ffc000) Stream removed, broadcasting: 3
I0216 11:58:41.467681       9 log.go:172] (0xc000acaf20) (0xc0021588c0) Stream removed, broadcasting: 5
Feb 16 11:58:41.467: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:58:41.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-tb4pd" for this suite.
Feb 16 11:59:07.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:59:07.602: INFO: namespace: e2e-tests-pod-network-test-tb4pd, resource: bindings, ignored listing per whitelist
Feb 16 11:59:07.896: INFO: namespace e2e-tests-pod-network-test-tb4pd deletion completed in 26.389139389s

• [SLOW TEST:64.506 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:59:07.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 16 11:59:15.915: INFO: 10 pods remaining
Feb 16 11:59:15.915: INFO: 9 pods has nil DeletionTimestamp
Feb 16 11:59:15.915: INFO: 
Feb 16 11:59:18.802: INFO: 0 pods remaining
Feb 16 11:59:18.802: INFO: 0 pods has nil DeletionTimestamp
Feb 16 11:59:18.802: INFO: 
Feb 16 11:59:19.467: INFO: 0 pods remaining
Feb 16 11:59:19.467: INFO: 0 pods has nil DeletionTimestamp
Feb 16 11:59:19.467: INFO: 
STEP: Gathering metrics
W0216 11:59:20.267662       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 11:59:20.267: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:59:20.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-l9dq6" for this suite.
Feb 16 11:59:34.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 11:59:34.594: INFO: namespace: e2e-tests-gc-l9dq6, resource: bindings, ignored listing per whitelist
Feb 16 11:59:34.642: INFO: namespace e2e-tests-gc-l9dq6 deletion completed in 14.367795127s

• [SLOW TEST:26.746 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 11:59:34.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 16 11:59:48.032: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 11:59:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-4cwmf" for this suite.
Feb 16 12:00:27.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:00:28.031: INFO: namespace: e2e-tests-replicaset-4cwmf, resource: bindings, ignored listing per whitelist
Feb 16 12:00:28.081: INFO: namespace e2e-tests-replicaset-4cwmf deletion completed in 38.974718123s

• [SLOW TEST:53.439 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:00:28.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 16 12:00:28.238: INFO: PodSpec: initContainers in spec.initContainers
Feb 16 12:01:42.768: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ec51fbdd-50b3-11ea-aa00-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-tw2w5", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-tw2w5/pods/pod-init-ec51fbdd-50b3-11ea-aa00-0242ac110008", UID:"ec531885-50b3-11ea-a994-fa163e34d433", ResourceVersion:"21863772", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717451228, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"238591441"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pzqzr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015913c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pzqzr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pzqzr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pzqzr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0008d4898), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001b1ea20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008d49c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008d4a10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0008d4a18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0008d4a1c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717451228, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717451228, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717451228, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717451228, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0012deea0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c0e3f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c0e460)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4dabd5a907d500d34c1d522190b87c69ef9a9b7f028cb88e0ead5df74ad2a29a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012deee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012deec0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:01:42.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-tw2w5" for this suite.
Feb 16 12:02:06.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:02:07.034: INFO: namespace: e2e-tests-init-container-tw2w5, resource: bindings, ignored listing per whitelist
Feb 16 12:02:07.107: INFO: namespace e2e-tests-init-container-tw2w5 deletion completed in 24.271908718s

• [SLOW TEST:99.026 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:02:07.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0216 12:02:38.114045       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 12:02:38.114: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:02:38.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kgb9b" for this suite.
Feb 16 12:02:48.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:02:49.694: INFO: namespace: e2e-tests-gc-kgb9b, resource: bindings, ignored listing per whitelist
Feb 16 12:02:49.698: INFO: namespace e2e-tests-gc-kgb9b deletion completed in 11.5783612s

• [SLOW TEST:42.591 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:02:49.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 16 12:02:50.100: INFO: Waiting up to 5m0s for pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-zcsb4" to be "success or failure"
Feb 16 12:02:50.126: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.041145ms
Feb 16 12:02:52.153: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052801525s
Feb 16 12:02:54.199: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099325797s
Feb 16 12:02:56.215: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115325308s
Feb 16 12:02:58.233: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132586132s
Feb 16 12:03:00.251: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150555796s
STEP: Saw pod success
Feb 16 12:03:00.251: INFO: Pod "downward-api-40c978f6-50b4-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:03:00.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-40c978f6-50b4-11ea-aa00-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 16 12:03:00.428: INFO: Waiting for pod downward-api-40c978f6-50b4-11ea-aa00-0242ac110008 to disappear
Feb 16 12:03:00.511: INFO: Pod downward-api-40c978f6-50b4-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:03:00.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zcsb4" for this suite.
Feb 16 12:03:06.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:03:06.781: INFO: namespace: e2e-tests-downward-api-zcsb4, resource: bindings, ignored listing per whitelist
Feb 16 12:03:06.786: INFO: namespace e2e-tests-downward-api-zcsb4 deletion completed in 6.259015275s

• [SLOW TEST:17.087 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:03:06.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 12:03:06.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-g6t25'
Feb 16 12:03:07.063: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 12:03:07.064: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 16 12:03:09.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-g6t25'
Feb 16 12:03:09.461: INFO: stderr: ""
Feb 16 12:03:09.462: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:03:09.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g6t25" for this suite.
Feb 16 12:03:15.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:03:15.776: INFO: namespace: e2e-tests-kubectl-g6t25, resource: bindings, ignored listing per whitelist
Feb 16 12:03:15.900: INFO: namespace e2e-tests-kubectl-g6t25 deletion completed in 6.237818467s

• [SLOW TEST:9.114 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:03:15.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:03:16.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-v96d9" to be "success or failure"
Feb 16 12:03:16.100: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.916862ms
Feb 16 12:03:18.266: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175208659s
Feb 16 12:03:20.283: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192477378s
Feb 16 12:03:22.323: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233020962s
Feb 16 12:03:24.368: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277582291s
Feb 16 12:03:26.420: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.329374947s
STEP: Saw pod success
Feb 16 12:03:26.420: INFO: Pod "downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:03:26.812: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:03:27.089: INFO: Waiting for pod downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008 to disappear
Feb 16 12:03:27.095: INFO: Pod downwardapi-volume-505c8211-50b4-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:03:27.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v96d9" for this suite.
Feb 16 12:03:33.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:03:33.195: INFO: namespace: e2e-tests-projected-v96d9, resource: bindings, ignored listing per whitelist
Feb 16 12:03:33.316: INFO: namespace e2e-tests-projected-v96d9 deletion completed in 6.210918208s

• [SLOW TEST:17.416 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
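The "Waiting up to 5m0s for pod … Phase=Pending … Elapsed: …" lines above come from the framework's polling loop (implemented in Go in test/e2e/framework): it re-checks the pod phase on an interval until the pod reaches "success or failure" or the timeout expires. A minimal sketch of that polling pattern, in Python for illustration (names are ours, not the framework's):

```python
import time

def poll(interval, timeout, condition):
    """Sketch of a wait.Poll-style helper: call condition() every
    `interval` seconds until it returns True or `timeout` seconds
    elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

The real framework additionally logs the elapsed time and current phase on every iteration, which is what produces the repeated `Phase="Pending"` lines in the log.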
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:03:33.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5ac6a8c0-50b4-11ea-aa00-0242ac110008
STEP: Creating secret with name s-test-opt-upd-5ac6aae2-50b4-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5ac6a8c0-50b4-11ea-aa00-0242ac110008
STEP: Updating secret s-test-opt-upd-5ac6aae2-50b4-11ea-aa00-0242ac110008
STEP: Creating secret with name s-test-opt-create-5ac6ab40-50b4-11ea-aa00-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:05:02.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dfs9r" for this suite.
Feb 16 12:05:26.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:05:26.432: INFO: namespace: e2e-tests-secrets-dfs9r, resource: bindings, ignored listing per whitelist
Feb 16 12:05:26.524: INFO: namespace e2e-tests-secrets-dfs9r deletion completed in 24.261154988s

• [SLOW TEST:113.207 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:05:26.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 16 12:05:26.942: INFO: Waiting up to 5m0s for pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008" in namespace "e2e-tests-var-expansion-fv2jb" to be "success or failure"
Feb 16 12:05:26.960: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.185727ms
Feb 16 12:05:29.136: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19355684s
Feb 16 12:05:31.156: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21368035s
Feb 16 12:05:33.701: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.758357225s
Feb 16 12:05:35.938: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.995533447s
Feb 16 12:05:37.956: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.013057811s
Feb 16 12:05:39.972: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.028801502s
STEP: Saw pod success
Feb 16 12:05:39.972: INFO: Pod "var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:05:39.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 16 12:05:40.092: INFO: Waiting for pod var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008 to disappear
Feb 16 12:05:40.171: INFO: Pod var-expansion-9e4d3db9-50b4-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:05:40.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-fv2jb" for this suite.
Feb 16 12:05:46.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:05:46.504: INFO: namespace: e2e-tests-var-expansion-fv2jb, resource: bindings, ignored listing per whitelist
Feb 16 12:05:46.524: INFO: namespace e2e-tests-var-expansion-fv2jb deletion completed in 6.340024443s

• [SLOW TEST:19.999 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
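The Variable Expansion spec above verifies Kubernetes `$(VAR)` substitution in a container's `args`. The rules (from the Kubernetes API conventions) are: `$(VAR)` is replaced when `VAR` is a defined environment variable, `$$` escapes a literal `$`, and references to undefined variables are left untouched. A small Python sketch of those rules (the real implementation is in Go inside the kubelet; this is illustrative only):

```python
def expand(s, env):
    """Sketch of Kubernetes-style $(VAR) expansion for container
    command/args. env maps variable names to values."""
    out = []
    i = 0
    while i < len(s):
        c = s[i]
        if c == '$' and i + 1 < len(s):
            nxt = s[i + 1]
            if nxt == '$':          # $$ -> literal $
                out.append('$')
                i += 2
                continue
            if nxt == '(':          # $(NAME) -> value, if defined
                end = s.find(')', i + 2)
                if end != -1:
                    name = s[i + 2:end]
                    if name in env:
                        out.append(env[name])
                        i = end + 1
                        continue
        out.append(c)               # anything else passes through
        i += 1
    return ''.join(out)
```

Note that an undefined `$(MISSING)` is deliberately preserved verbatim rather than replaced with an empty string.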
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:05:46.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-aa275513-50b4-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 12:05:46.740: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-lbnx9" to be "success or failure"
Feb 16 12:05:46.749: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.368819ms
Feb 16 12:05:49.036: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29638377s
Feb 16 12:05:51.083: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343003436s
Feb 16 12:05:53.277: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537420615s
Feb 16 12:05:55.309: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569294931s
Feb 16 12:05:57.538: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.798522568s
Feb 16 12:05:59.553: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.813469211s
STEP: Saw pod success
Feb 16 12:05:59.553: INFO: Pod "pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:05:59.558: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 12:06:00.658: INFO: Waiting for pod pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008 to disappear
Feb 16 12:06:00.845: INFO: Pod pod-projected-configmaps-aa282258-50b4-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:06:00.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lbnx9" for this suite.
Feb 16 12:06:06.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:06:06.995: INFO: namespace: e2e-tests-projected-lbnx9, resource: bindings, ignored listing per whitelist
Feb 16 12:06:07.022: INFO: namespace e2e-tests-projected-lbnx9 deletion completed in 6.165346497s

• [SLOW TEST:20.496 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
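The spec above mounts a single ConfigMap into the same pod twice, through two projected volumes, and checks both copies are readable. A hypothetical manifest of that shape (every name, image, and path here is invented for illustration, not taken from the log):

```yaml
# Illustrative sketch only -- not the manifest the e2e test generates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c",
      "cat /etc/volume-1/data-1 /etc/volume-2/data-1"]
    volumeMounts:
    - name: volume-1
      mountPath: /etc/volume-1
    - name: volume-2
      mountPath: /etc/volume-2
  volumes:
  - name: volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
  - name: volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```

Both volumes reference the same ConfigMap, so the test passes when the container sees identical content at both mount paths.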
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:06:07.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:06:07.253: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 16 12:06:07.281: INFO: Number of nodes with available pods: 0
Feb 16 12:06:07.281: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:08.317: INFO: Number of nodes with available pods: 0
Feb 16 12:06:08.317: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:10.889: INFO: Number of nodes with available pods: 0
Feb 16 12:06:10.890: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:11.923: INFO: Number of nodes with available pods: 0
Feb 16 12:06:11.923: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:12.299: INFO: Number of nodes with available pods: 0
Feb 16 12:06:12.299: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:13.301: INFO: Number of nodes with available pods: 0
Feb 16 12:06:13.301: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:15.151: INFO: Number of nodes with available pods: 0
Feb 16 12:06:15.151: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:15.867: INFO: Number of nodes with available pods: 0
Feb 16 12:06:15.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:16.550: INFO: Number of nodes with available pods: 0
Feb 16 12:06:16.550: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:17.298: INFO: Number of nodes with available pods: 0
Feb 16 12:06:17.298: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:18.303: INFO: Number of nodes with available pods: 0
Feb 16 12:06:18.304: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:19.298: INFO: Number of nodes with available pods: 1
Feb 16 12:06:19.298: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 16 12:06:19.572: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:20.618: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:21.646: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:22.618: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:23.625: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:25.504: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:26.268: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:26.602: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:27.603: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:27.603: INFO: Pod daemon-set-w7wqq is not available
Feb 16 12:06:28.624: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:28.624: INFO: Pod daemon-set-w7wqq is not available
Feb 16 12:06:29.603: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:29.603: INFO: Pod daemon-set-w7wqq is not available
Feb 16 12:06:30.640: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:30.640: INFO: Pod daemon-set-w7wqq is not available
Feb 16 12:06:31.609: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:31.609: INFO: Pod daemon-set-w7wqq is not available
Feb 16 12:06:32.612: INFO: Wrong image for pod: daemon-set-w7wqq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 16 12:06:32.612: INFO: Pod daemon-set-w7wqq is not available
Feb 16 12:06:33.611: INFO: Pod daemon-set-xv49q is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 16 12:06:33.648: INFO: Number of nodes with available pods: 0
Feb 16 12:06:33.648: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:34.717: INFO: Number of nodes with available pods: 0
Feb 16 12:06:34.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:35.688: INFO: Number of nodes with available pods: 0
Feb 16 12:06:35.688: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:36.697: INFO: Number of nodes with available pods: 0
Feb 16 12:06:36.697: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:38.021: INFO: Number of nodes with available pods: 0
Feb 16 12:06:38.022: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:38.672: INFO: Number of nodes with available pods: 0
Feb 16 12:06:38.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:39.670: INFO: Number of nodes with available pods: 0
Feb 16 12:06:39.670: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:40.685: INFO: Number of nodes with available pods: 0
Feb 16 12:06:40.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:06:41.678: INFO: Number of nodes with available pods: 1
Feb 16 12:06:41.678: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-vj45j, will wait for the garbage collector to delete the pods
Feb 16 12:06:42.121: INFO: Deleting DaemonSet.extensions daemon-set took: 49.995439ms
Feb 16 12:06:42.222: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.513357ms
Feb 16 12:06:52.830: INFO: Number of nodes with available pods: 0
Feb 16 12:06:52.830: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 12:06:52.834: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vj45j/daemonsets","resourceVersion":"21864425"},"items":null}

Feb 16 12:06:52.838: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vj45j/pods","resourceVersion":"21864425"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:06:52.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vj45j" for this suite.
Feb 16 12:06:58.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:06:59.139: INFO: namespace: e2e-tests-daemonsets-vj45j, resource: bindings, ignored listing per whitelist
Feb 16 12:06:59.142: INFO: namespace e2e-tests-daemonsets-vj45j deletion completed in 6.283262684s

• [SLOW TEST:52.120 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
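During the RollingUpdate phase above, the test repeatedly lists the daemon pods and logs `Wrong image for pod: …` for every pod whose container image has not yet caught up with the updated DaemonSet spec; the rollout is complete once that list is empty and the replacement pod is available. A sketch of that check, in Python for illustration (the real check is Go code in test/e2e/apps/daemon_set.go):

```python
def pods_with_wrong_image(pods, want):
    """Sketch of the rollout check behind the 'Wrong image for pod'
    log lines: pods maps pod name -> current container image; return
    the (sorted) names still running an image other than `want`."""
    return [name for name, image in sorted(pods.items())
            if image != want]
```

In the log above, `daemon-set-w7wqq` stays on `docker.io/library/nginx:1.14-alpine` until it is deleted and replaced by `daemon-set-xv49q`, which runs the expected image.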
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:06:59.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:07:12.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-9z4fq" for this suite.
Feb 16 12:07:36.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:07:36.730: INFO: namespace: e2e-tests-replication-controller-9z4fq, resource: bindings, ignored listing per whitelist
Feb 16 12:07:36.814: INFO: namespace e2e-tests-replication-controller-9z4fq deletion completed in 24.328425073s

• [SLOW TEST:37.672 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
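The adoption spec above ("Given a Pod with a 'name' label … Then the orphan pod is adopted") relies on the ReplicationController's equality-based label selector: an orphan pod is adopted when every key/value pair in the controller's selector is present in the pod's labels. A minimal sketch of that matching rule (illustrative Python, not the Go controller code):

```python
def selector_matches(selector, labels):
    """Sketch of equality-based selector matching: every selector
    key must exist in the pod labels with exactly the same value.
    Extra pod labels are allowed and ignored."""
    return all(labels.get(k) == v for k, v in selector.items())
```

This is why the pod is created with a `name` label first and the controller with a matching selector second: the controller manager's sync loop finds the label match and takes ownership of the pre-existing pod.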
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:07:36.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 16 12:07:37.918: INFO: Pod name wrapped-volume-race-ec5938a5-50b4-11ea-aa00-0242ac110008: Found 0 pods out of 5
Feb 16 12:07:42.954: INFO: Pod name wrapped-volume-race-ec5938a5-50b4-11ea-aa00-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ec5938a5-50b4-11ea-aa00-0242ac110008 in namespace e2e-tests-emptydir-wrapper-v9ghd, will wait for the garbage collector to delete the pods
Feb 16 12:09:57.219: INFO: Deleting ReplicationController wrapped-volume-race-ec5938a5-50b4-11ea-aa00-0242ac110008 took: 18.65788ms
Feb 16 12:09:57.520: INFO: Terminating ReplicationController wrapped-volume-race-ec5938a5-50b4-11ea-aa00-0242ac110008 pods took: 300.934204ms
STEP: Creating RC which spawns configmap-volume pods
Feb 16 12:10:44.076: INFO: Pod name wrapped-volume-race-5b4bca10-50b5-11ea-aa00-0242ac110008: Found 0 pods out of 5
Feb 16 12:10:49.161: INFO: Pod name wrapped-volume-race-5b4bca10-50b5-11ea-aa00-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5b4bca10-50b5-11ea-aa00-0242ac110008 in namespace e2e-tests-emptydir-wrapper-v9ghd, will wait for the garbage collector to delete the pods
Feb 16 12:12:55.328: INFO: Deleting ReplicationController wrapped-volume-race-5b4bca10-50b5-11ea-aa00-0242ac110008 took: 22.364687ms
Feb 16 12:12:55.629: INFO: Terminating ReplicationController wrapped-volume-race-5b4bca10-50b5-11ea-aa00-0242ac110008 pods took: 301.041694ms
STEP: Creating RC which spawns configmap-volume pods
Feb 16 12:13:40.441: INFO: Pod name wrapped-volume-race-c471b62a-50b5-11ea-aa00-0242ac110008: Found 0 pods out of 5
Feb 16 12:13:45.470: INFO: Pod name wrapped-volume-race-c471b62a-50b5-11ea-aa00-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c471b62a-50b5-11ea-aa00-0242ac110008 in namespace e2e-tests-emptydir-wrapper-v9ghd, will wait for the garbage collector to delete the pods
Feb 16 12:15:29.680: INFO: Deleting ReplicationController wrapped-volume-race-c471b62a-50b5-11ea-aa00-0242ac110008 took: 73.596631ms
Feb 16 12:15:29.981: INFO: Terminating ReplicationController wrapped-volume-race-c471b62a-50b5-11ea-aa00-0242ac110008 pods took: 301.014077ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:16:15.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-v9ghd" for this suite.
Feb 16 12:16:26.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:16:26.097: INFO: namespace: e2e-tests-emptydir-wrapper-v9ghd, resource: bindings, ignored listing per whitelist
Feb 16 12:16:26.168: INFO: namespace e2e-tests-emptydir-wrapper-v9ghd deletion completed in 10.214796303s

• [SLOW TEST:529.353 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:16:26.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:16:26.454: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 16 12:16:26.630: INFO: Number of nodes with available pods: 0
Feb 16 12:16:26.630: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 16 12:16:26.759: INFO: Number of nodes with available pods: 0
Feb 16 12:16:26.759: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:29.418: INFO: Number of nodes with available pods: 0
Feb 16 12:16:29.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:30.184: INFO: Number of nodes with available pods: 0
Feb 16 12:16:30.184: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:31.782: INFO: Number of nodes with available pods: 0
Feb 16 12:16:31.783: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:33.053: INFO: Number of nodes with available pods: 0
Feb 16 12:16:33.053: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:33.791: INFO: Number of nodes with available pods: 0
Feb 16 12:16:33.791: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:34.775: INFO: Number of nodes with available pods: 0
Feb 16 12:16:34.775: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:36.650: INFO: Number of nodes with available pods: 0
Feb 16 12:16:36.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:36.880: INFO: Number of nodes with available pods: 0
Feb 16 12:16:36.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:37.773: INFO: Number of nodes with available pods: 0
Feb 16 12:16:37.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:38.769: INFO: Number of nodes with available pods: 0
Feb 16 12:16:38.769: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:39.782: INFO: Number of nodes with available pods: 0
Feb 16 12:16:39.782: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:40.779: INFO: Number of nodes with available pods: 1
Feb 16 12:16:40.780: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 16 12:16:40.955: INFO: Number of nodes with available pods: 1
Feb 16 12:16:40.956: INFO: Number of running nodes: 0, number of available pods: 1
Feb 16 12:16:41.999: INFO: Number of nodes with available pods: 0
Feb 16 12:16:41.999: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 16 12:16:42.171: INFO: Number of nodes with available pods: 0
Feb 16 12:16:42.171: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:43.553: INFO: Number of nodes with available pods: 0
Feb 16 12:16:43.553: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:45.936: INFO: Number of nodes with available pods: 0
Feb 16 12:16:45.936: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:46.202: INFO: Number of nodes with available pods: 0
Feb 16 12:16:46.202: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:47.194: INFO: Number of nodes with available pods: 0
Feb 16 12:16:47.194: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:48.193: INFO: Number of nodes with available pods: 0
Feb 16 12:16:48.193: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:49.216: INFO: Number of nodes with available pods: 0
Feb 16 12:16:49.216: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:50.195: INFO: Number of nodes with available pods: 0
Feb 16 12:16:50.195: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:51.190: INFO: Number of nodes with available pods: 0
Feb 16 12:16:51.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:52.188: INFO: Number of nodes with available pods: 0
Feb 16 12:16:52.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:53.209: INFO: Number of nodes with available pods: 0
Feb 16 12:16:53.209: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:54.286: INFO: Number of nodes with available pods: 0
Feb 16 12:16:54.286: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:55.459: INFO: Number of nodes with available pods: 0
Feb 16 12:16:55.459: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:56.347: INFO: Number of nodes with available pods: 0
Feb 16 12:16:56.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:57.181: INFO: Number of nodes with available pods: 0
Feb 16 12:16:57.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:16:58.196: INFO: Number of nodes with available pods: 0
Feb 16 12:16:58.196: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:17:00.014: INFO: Number of nodes with available pods: 0
Feb 16 12:17:00.014: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:17:00.329: INFO: Number of nodes with available pods: 0
Feb 16 12:17:00.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:17:01.320: INFO: Number of nodes with available pods: 0
Feb 16 12:17:01.320: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:17:02.207: INFO: Number of nodes with available pods: 0
Feb 16 12:17:02.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:17:03.192: INFO: Number of nodes with available pods: 0
Feb 16 12:17:03.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 16 12:17:04.194: INFO: Number of nodes with available pods: 1
Feb 16 12:17:04.194: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rnxrn, will wait for the garbage collector to delete the pods
Feb 16 12:17:04.304: INFO: Deleting DaemonSet.extensions daemon-set took: 27.131156ms
Feb 16 12:17:04.605: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.668149ms
Feb 16 12:17:11.342: INFO: Number of nodes with available pods: 0
Feb 16 12:17:11.342: INFO: Number of running nodes: 0, number of available pods: 0
Feb 16 12:17:11.371: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rnxrn/daemonsets","resourceVersion":"21865669"},"items":null}

Feb 16 12:17:11.382: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rnxrn/pods","resourceVersion":"21865669"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:17:11.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rnxrn" for this suite.
Feb 16 12:17:17.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:17:17.647: INFO: namespace: e2e-tests-daemonsets-rnxrn, resource: bindings, ignored listing per whitelist
Feb 16 12:17:17.794: INFO: namespace e2e-tests-daemonsets-rnxrn deletion completed in 6.308818289s

• [SLOW TEST:51.626 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
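For reference, the "complex daemon" test above drives scheduling entirely through a node label and a DaemonSet `nodeSelector`, then flips the update strategy to RollingUpdate mid-run. A minimal manifest with that shape is sketched below; the names, label key, and image are illustrative assumptions, not values taken from the suite.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # illustrative; the suite generates its own names
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # the test switches to this strategy partway through
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green          # assumed label key; the test relabels the node to (un)schedule daemons
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

Relabeling the node so it no longer matches `nodeSelector` is what produces the "Number of running nodes: 0" lines above, and restoring a matching label brings the pod count back to 1.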
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:17:17.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 16 12:17:28.072: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-462abb58-50b6-11ea-aa00-0242ac110008,GenerateName:,Namespace:e2e-tests-events-f2sz5,SelfLink:/api/v1/namespaces/e2e-tests-events-f2sz5/pods/send-events-462abb58-50b6-11ea-aa00-0242ac110008,UID:4634d6f4-50b6-11ea-a994-fa163e34d433,ResourceVersion:21865719,Generation:0,CreationTimestamp:2020-02-16 12:17:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 969733393,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h5ttl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h5ttl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-h5ttl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002407310} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002407330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:17:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:17:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:17:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:17:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-16 12:17:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-16 12:17:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://f1dcdf6e300c06b91df668f64ccff8e275ba03abfbe5ff8f8f15859d748a41b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 16 12:17:30.085: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 16 12:17:32.097: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:17:32.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-f2sz5" for this suite.
Feb 16 12:18:14.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:18:14.502: INFO: namespace: e2e-tests-events-f2sz5, resource: bindings, ignored listing per whitelist
Feb 16 12:18:14.589: INFO: namespace e2e-tests-events-f2sz5 deletion completed in 42.439900139s

• [SLOW TEST:56.795 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
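The events test above creates a single pod and then polls for a scheduler event and a kubelet event referencing it. Reconstructed from the pod dump in the log, the pod is roughly the following; the name suffix is generated at runtime, so the fixed name here is an assumption.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-example   # the suite appends a UUID to the name
  labels:
    name: foo
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
```

Outside the suite, the same events can be inspected with a field selector, e.g. `kubectl get events --field-selector involvedObject.name=<pod-name>`, filtering on the reporting component (`default-scheduler` vs. the node's kubelet).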
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:18:14.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-681aa828-50b6-11ea-aa00-0242ac110008
STEP: Creating secret with name secret-projected-all-test-volume-681aa7eb-50b6-11ea-aa00-0242ac110008
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 16 12:18:14.997: INFO: Waiting up to 5m0s for pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-pnwqq" to be "success or failure"
Feb 16 12:18:15.004: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.385888ms
Feb 16 12:18:17.113: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116524108s
Feb 16 12:18:19.146: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148738755s
Feb 16 12:18:21.554: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55702066s
Feb 16 12:18:23.570: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573292299s
Feb 16 12:18:25.590: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.593513843s
Feb 16 12:18:27.713: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.716049526s
STEP: Saw pod success
Feb 16 12:18:27.713: INFO: Pod "projected-volume-681aa674-50b6-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:18:27.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-681aa674-50b6-11ea-aa00-0242ac110008 container projected-all-volume-test: 
STEP: delete the pod
Feb 16 12:18:28.063: INFO: Waiting for pod projected-volume-681aa674-50b6-11ea-aa00-0242ac110008 to disappear
Feb 16 12:18:28.091: INFO: Pod projected-volume-681aa674-50b6-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:18:28.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pnwqq" for this suite.
Feb 16 12:18:36.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:18:36.196: INFO: namespace: e2e-tests-projected-pnwqq, resource: bindings, ignored listing per whitelist
Feb 16 12:18:36.313: INFO: namespace e2e-tests-projected-pnwqq deletion completed in 8.200898978s

• [SLOW TEST:21.724 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
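The "Projected combined" test above first creates a ConfigMap and a Secret, then mounts them together with downward API data through a single `projected` volume. A sketch of such a pod follows; the key names, paths, and image are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume   # created by the test beforehand
          items:
          - key: configmap-data                       # assumed key name
            path: cm
      - secret:
          name: secret-projected-all-test-volume
          items:
          - key: secret-data                          # assumed key name
            path: secret
```

The pod runs to completion and the test asserts "success or failure" on its phase, which is why the log polls until `Phase="Succeeded"`.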
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:18:36.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 16 12:18:49.243: INFO: Successfully updated pod "annotationupdate750f1838-50b6-11ea-aa00-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:18:51.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-565k8" for this suite.
Feb 16 12:19:15.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:19:15.598: INFO: namespace: e2e-tests-projected-565k8, resource: bindings, ignored listing per whitelist
Feb 16 12:19:15.713: INFO: namespace e2e-tests-projected-565k8 deletion completed in 24.303917431s

• [SLOW TEST:39.399 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
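The downwardAPI test above ("Successfully updated pod ...annotationupdate...") relies on the kubelet refreshing a projected downward API file when pod annotations change. A minimal pod demonstrating that mechanism looks like this; the annotation key and image are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    builder: alice            # updated mid-test; the mounted file follows the change
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```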
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:19:15.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-vzw6l
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-vzw6l
STEP: Deleting pre-stop pod
Feb 16 12:19:41.292: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:19:41.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-vzw6l" for this suite.
Feb 16 12:20:23.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:20:23.599: INFO: namespace: e2e-tests-prestop-vzw6l, resource: bindings, ignored listing per whitelist
Feb 16 12:20:23.622: INFO: namespace e2e-tests-prestop-vzw6l deletion completed in 42.265784814s

• [SLOW TEST:67.908 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
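In the PreStop test above, the tester pod's `preStop` hook reports back to the server pod, which is why the server's state dump shows `"Received": {"prestop": 1}` after the tester is deleted. A hedged sketch of such a tester pod follows; the hook command and endpoint are assumptions standing in for the suite's actual hook.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # illustrative: POST to the server pod so it can record the hook firing;
          # SERVER_POD_IP is a placeholder, not a real value from the run
          command: ["wget", "-qO-", "--post-data=prestop", "http://SERVER_POD_IP:8080/write"]
```

Because `preStop` runs before the container receives SIGTERM, the server observes the notification even though the tester pod is being deleted.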
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:20:23.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 16 12:20:24.007: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 16 12:20:24.047: INFO: Waiting for terminating namespaces to be deleted...
Feb 16 12:20:24.094: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 16 12:20:24.114: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:20:24.114: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:20:24.114: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:20:24.114: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 16 12:20:24.114: INFO: 	Container coredns ready: true, restart count 0
Feb 16 12:20:24.114: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 16 12:20:24.114: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 16 12:20:24.114: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:20:24.114: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 16 12:20:24.114: INFO: 	Container weave ready: true, restart count 0
Feb 16 12:20:24.114: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 12:20:24.114: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 16 12:20:24.114: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 16 12:20:24.280: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b5378980-50b6-11ea-aa00-0242ac110008.15f3e14f2dbf99c1], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nzxwf/filler-pod-b5378980-50b6-11ea-aa00-0242ac110008 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b5378980-50b6-11ea-aa00-0242ac110008.15f3e1503235b22f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b5378980-50b6-11ea-aa00-0242ac110008.15f3e150f0dac34d], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b5378980-50b6-11ea-aa00-0242ac110008.15f3e1512773a943], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f3e1518864ab4f], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:20:36.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nzxwf" for this suite.
Feb 16 12:20:42.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:20:42.484: INFO: namespace: e2e-tests-sched-pred-nzxwf, resource: bindings, ignored listing per whitelist
Feb 16 12:20:42.634: INFO: namespace e2e-tests-sched-pred-nzxwf deletion completed in 6.582907192s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:19.011 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
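The scheduler-predicates test above sizes a "filler" pod to consume most of the node's remaining allocatable CPU (the per-pod request tally in the log is how it computes that), then submits one more pod that cannot fit, expecting the `FailedScheduling` / `Insufficient cpu` event seen above. The filler pod is roughly shaped like this; the CPU figure is illustrative, since the suite derives it from node allocatable minus existing requests.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # assumed value; computed at runtime by the test
      limits:
        cpu: "600m"
```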
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:20:42.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 16 12:20:43.808: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:20:43.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-78tv9" for this suite.
Feb 16 12:20:50.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:20:50.222: INFO: namespace: e2e-tests-kubectl-78tv9, resource: bindings, ignored listing per whitelist
Feb 16 12:20:50.303: INFO: namespace e2e-tests-kubectl-78tv9 deletion completed in 6.341807718s

• [SLOW TEST:7.666 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:20:50.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-pxjvd/configmap-test-c4e3a756-50b6-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 12:20:50.604: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008" in namespace "e2e-tests-configmap-pxjvd" to be "success or failure"
Feb 16 12:20:50.608: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636187ms
Feb 16 12:20:52.646: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041439283s
Feb 16 12:20:54.669: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064867383s
Feb 16 12:20:56.877: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273033812s
Feb 16 12:20:58.895: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291061994s
Feb 16 12:21:00.915: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310429243s
Feb 16 12:21:03.315: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.71087863s
STEP: Saw pod success
Feb 16 12:21:03.315: INFO: Pod "pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:21:03.325: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008 container env-test: 
STEP: delete the pod
Feb 16 12:21:03.637: INFO: Waiting for pod pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008 to disappear
Feb 16 12:21:03.752: INFO: Pod pod-configmaps-c4e4d569-50b6-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:21:03.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pxjvd" for this suite.
Feb 16 12:21:09.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:21:09.845: INFO: namespace: e2e-tests-configmap-pxjvd, resource: bindings, ignored listing per whitelist
Feb 16 12:21:09.983: INFO: namespace e2e-tests-configmap-pxjvd deletion completed in 6.218805379s

• [SLOW TEST:19.679 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
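The ConfigMap test above injects a ConfigMap key into a container's environment and checks the container's output. The pattern it exercises can be sketched as follows; the names, key, and image are illustrative assumptions rather than the suite's generated values.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1
```

As in the log, the pod is expected to run to completion ("success or failure"), with the container's logs used to verify the variable was set.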
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:21:09.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7hjhd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 12:21:10.230: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 12:21:46.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7hjhd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 12:21:46.552: INFO: >>> kubeConfig: /root/.kube/config
I0216 12:21:46.700029       9 log.go:172] (0xc000c66420) (0xc000e75c20) Create stream
I0216 12:21:46.700287       9 log.go:172] (0xc000c66420) (0xc000e75c20) Stream added, broadcasting: 1
I0216 12:21:46.722067       9 log.go:172] (0xc000c66420) Reply frame received for 1
I0216 12:21:46.722211       9 log.go:172] (0xc000c66420) (0xc0026fa780) Create stream
I0216 12:21:46.722237       9 log.go:172] (0xc000c66420) (0xc0026fa780) Stream added, broadcasting: 3
I0216 12:21:46.724084       9 log.go:172] (0xc000c66420) Reply frame received for 3
I0216 12:21:46.724147       9 log.go:172] (0xc000c66420) (0xc000709ea0) Create stream
I0216 12:21:46.724176       9 log.go:172] (0xc000c66420) (0xc000709ea0) Stream added, broadcasting: 5
I0216 12:21:46.726326       9 log.go:172] (0xc000c66420) Reply frame received for 5
I0216 12:21:46.922480       9 log.go:172] (0xc000c66420) Data frame received for 3
I0216 12:21:46.922638       9 log.go:172] (0xc0026fa780) (3) Data frame handling
I0216 12:21:46.922664       9 log.go:172] (0xc0026fa780) (3) Data frame sent
I0216 12:21:47.126844       9 log.go:172] (0xc000c66420) Data frame received for 1
I0216 12:21:47.127054       9 log.go:172] (0xc000c66420) (0xc0026fa780) Stream removed, broadcasting: 3
I0216 12:21:47.127146       9 log.go:172] (0xc000e75c20) (1) Data frame handling
I0216 12:21:47.127178       9 log.go:172] (0xc000e75c20) (1) Data frame sent
I0216 12:21:47.127239       9 log.go:172] (0xc000c66420) (0xc000709ea0) Stream removed, broadcasting: 5
I0216 12:21:47.127305       9 log.go:172] (0xc000c66420) (0xc000e75c20) Stream removed, broadcasting: 1
I0216 12:21:47.127324       9 log.go:172] (0xc000c66420) Go away received
I0216 12:21:47.127989       9 log.go:172] (0xc000c66420) (0xc000e75c20) Stream removed, broadcasting: 1
I0216 12:21:47.128008       9 log.go:172] (0xc000c66420) (0xc0026fa780) Stream removed, broadcasting: 3
I0216 12:21:47.128014       9 log.go:172] (0xc000c66420) (0xc000709ea0) Stream removed, broadcasting: 5
Feb 16 12:21:47.128: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:21:47.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7hjhd" for this suite.
Feb 16 12:22:11.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:22:11.262: INFO: namespace: e2e-tests-pod-network-test-7hjhd, resource: bindings, ignored listing per whitelist
Feb 16 12:22:11.325: INFO: namespace e2e-tests-pod-network-test-7hjhd deletion completed in 24.174843515s

• [SLOW TEST:61.341 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:22:11.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-m2f4m
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-m2f4m
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-m2f4m
Feb 16 12:22:11.851: INFO: Found 0 stateful pods, waiting for 1
Feb 16 12:22:22.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 16 12:22:22.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 12:22:22.791: INFO: stderr: "I0216 12:22:22.354719    2875 log.go:172] (0xc000706370) (0xc0008b05a0) Create stream\nI0216 12:22:22.354938    2875 log.go:172] (0xc000706370) (0xc0008b05a0) Stream added, broadcasting: 1\nI0216 12:22:22.362206    2875 log.go:172] (0xc000706370) Reply frame received for 1\nI0216 12:22:22.362257    2875 log.go:172] (0xc000706370) (0xc0006a0d20) Create stream\nI0216 12:22:22.362273    2875 log.go:172] (0xc000706370) (0xc0006a0d20) Stream added, broadcasting: 3\nI0216 12:22:22.363295    2875 log.go:172] (0xc000706370) Reply frame received for 3\nI0216 12:22:22.363313    2875 log.go:172] (0xc000706370) (0xc0006a0e60) Create stream\nI0216 12:22:22.363320    2875 log.go:172] (0xc000706370) (0xc0006a0e60) Stream added, broadcasting: 5\nI0216 12:22:22.364379    2875 log.go:172] (0xc000706370) Reply frame received for 5\nI0216 12:22:22.559189    2875 log.go:172] (0xc000706370) Data frame received for 3\nI0216 12:22:22.559623    2875 log.go:172] (0xc0006a0d20) (3) Data frame handling\nI0216 12:22:22.559675    2875 log.go:172] (0xc0006a0d20) (3) Data frame sent\nI0216 12:22:22.768723    2875 log.go:172] (0xc000706370) Data frame received for 1\nI0216 12:22:22.768844    2875 log.go:172] (0xc0008b05a0) (1) Data frame handling\nI0216 12:22:22.768924    2875 log.go:172] (0xc0008b05a0) (1) Data frame sent\nI0216 12:22:22.769718    2875 log.go:172] (0xc000706370) (0xc0008b05a0) Stream removed, broadcasting: 1\nI0216 12:22:22.770383    2875 log.go:172] (0xc000706370) (0xc0006a0d20) Stream removed, broadcasting: 3\nI0216 12:22:22.771327    2875 log.go:172] (0xc000706370) (0xc0006a0e60) Stream removed, broadcasting: 5\nI0216 12:22:22.771434    2875 log.go:172] (0xc000706370) (0xc0008b05a0) Stream removed, broadcasting: 1\nI0216 12:22:22.771520    2875 log.go:172] (0xc000706370) (0xc0006a0d20) Stream removed, broadcasting: 3\nI0216 12:22:22.771565    2875 log.go:172] (0xc000706370) (0xc0006a0e60) Stream removed, broadcasting: 5\n"
Feb 16 12:22:22.792: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 12:22:22.792: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 12:22:22.829: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 12:22:22.829: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 12:22:22.849: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 16 12:22:32.954: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:22:32.954: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:22:32.954: INFO: 
Feb 16 12:22:32.954: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 16 12:22:34.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980278841s
Feb 16 12:22:35.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.778341269s
Feb 16 12:22:36.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.56469296s
Feb 16 12:22:37.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.393119975s
Feb 16 12:22:38.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.370135039s
Feb 16 12:22:40.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.338516407s
Feb 16 12:22:41.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.548412313s
Feb 16 12:22:42.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 418.521281ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-m2f4m
Feb 16 12:22:43.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:22:45.057: INFO: stderr: "I0216 12:22:44.168393    2897 log.go:172] (0xc000892160) (0xc0006ea5a0) Create stream\nI0216 12:22:44.168532    2897 log.go:172] (0xc000892160) (0xc0006ea5a0) Stream added, broadcasting: 1\nI0216 12:22:44.174986    2897 log.go:172] (0xc000892160) Reply frame received for 1\nI0216 12:22:44.175012    2897 log.go:172] (0xc000892160) (0xc0006ea640) Create stream\nI0216 12:22:44.175017    2897 log.go:172] (0xc000892160) (0xc0006ea640) Stream added, broadcasting: 3\nI0216 12:22:44.177421    2897 log.go:172] (0xc000892160) Reply frame received for 3\nI0216 12:22:44.177443    2897 log.go:172] (0xc000892160) (0xc00059ebe0) Create stream\nI0216 12:22:44.177479    2897 log.go:172] (0xc000892160) (0xc00059ebe0) Stream added, broadcasting: 5\nI0216 12:22:44.178440    2897 log.go:172] (0xc000892160) Reply frame received for 5\nI0216 12:22:44.625617    2897 log.go:172] (0xc000892160) Data frame received for 3\nI0216 12:22:44.625665    2897 log.go:172] (0xc0006ea640) (3) Data frame handling\nI0216 12:22:44.625685    2897 log.go:172] (0xc0006ea640) (3) Data frame sent\nI0216 12:22:45.045972    2897 log.go:172] (0xc000892160) (0xc0006ea640) Stream removed, broadcasting: 3\nI0216 12:22:45.046204    2897 log.go:172] (0xc000892160) Data frame received for 1\nI0216 12:22:45.046252    2897 log.go:172] (0xc000892160) (0xc00059ebe0) Stream removed, broadcasting: 5\nI0216 12:22:45.046277    2897 log.go:172] (0xc0006ea5a0) (1) Data frame handling\nI0216 12:22:45.046296    2897 log.go:172] (0xc0006ea5a0) (1) Data frame sent\nI0216 12:22:45.046306    2897 log.go:172] (0xc000892160) (0xc0006ea5a0) Stream removed, broadcasting: 1\nI0216 12:22:45.046321    2897 log.go:172] (0xc000892160) Go away received\nI0216 12:22:45.046704    2897 log.go:172] (0xc000892160) (0xc0006ea5a0) Stream removed, broadcasting: 1\nI0216 12:22:45.046719    2897 log.go:172] (0xc000892160) (0xc0006ea640) Stream removed, broadcasting: 3\nI0216 12:22:45.046733    2897 log.go:172] (0xc000892160) (0xc00059ebe0) Stream removed, broadcasting: 5\n"
Feb 16 12:22:45.058: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 12:22:45.058: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 12:22:45.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:22:45.605: INFO: stderr: "I0216 12:22:45.312281    2918 log.go:172] (0xc0001380b0) (0xc00070e000) Create stream\nI0216 12:22:45.312412    2918 log.go:172] (0xc0001380b0) (0xc00070e000) Stream added, broadcasting: 1\nI0216 12:22:45.334951    2918 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0216 12:22:45.335041    2918 log.go:172] (0xc0001380b0) (0xc000416be0) Create stream\nI0216 12:22:45.335066    2918 log.go:172] (0xc0001380b0) (0xc000416be0) Stream added, broadcasting: 3\nI0216 12:22:45.341104    2918 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0216 12:22:45.341156    2918 log.go:172] (0xc0001380b0) (0xc0002fe000) Create stream\nI0216 12:22:45.341172    2918 log.go:172] (0xc0001380b0) (0xc0002fe000) Stream added, broadcasting: 5\nI0216 12:22:45.346996    2918 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0216 12:22:45.447713    2918 log.go:172] (0xc0001380b0) Data frame received for 5\nI0216 12:22:45.447799    2918 log.go:172] (0xc0002fe000) (5) Data frame handling\nI0216 12:22:45.447819    2918 log.go:172] (0xc0002fe000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0216 12:22:45.447847    2918 log.go:172] (0xc0001380b0) Data frame received for 3\nI0216 12:22:45.447892    2918 log.go:172] (0xc000416be0) (3) Data frame handling\nI0216 12:22:45.447917    2918 log.go:172] (0xc000416be0) (3) Data frame sent\nI0216 12:22:45.598766    2918 log.go:172] (0xc0001380b0) (0xc000416be0) Stream removed, broadcasting: 3\nI0216 12:22:45.598916    2918 log.go:172] (0xc0001380b0) (0xc0002fe000) Stream removed, broadcasting: 5\nI0216 12:22:45.598991    2918 log.go:172] (0xc0001380b0) Data frame received for 1\nI0216 12:22:45.598999    2918 log.go:172] (0xc00070e000) (1) Data frame handling\nI0216 12:22:45.599018    2918 log.go:172] (0xc00070e000) (1) Data frame sent\nI0216 12:22:45.599027    2918 log.go:172] (0xc0001380b0) (0xc00070e000) Stream removed, broadcasting: 1\nI0216 12:22:45.599043    2918 log.go:172] (0xc0001380b0) Go away received\nI0216 12:22:45.599302    2918 log.go:172] (0xc0001380b0) (0xc00070e000) Stream removed, broadcasting: 1\nI0216 12:22:45.599315    2918 log.go:172] (0xc0001380b0) (0xc000416be0) Stream removed, broadcasting: 3\nI0216 12:22:45.599324    2918 log.go:172] (0xc0001380b0) (0xc0002fe000) Stream removed, broadcasting: 5\n"
Feb 16 12:22:45.606: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 12:22:45.606: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 12:22:45.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:22:46.111: INFO: stderr: "I0216 12:22:45.790774    2940 log.go:172] (0xc0006a8370) (0xc0006e8640) Create stream\nI0216 12:22:45.790996    2940 log.go:172] (0xc0006a8370) (0xc0006e8640) Stream added, broadcasting: 1\nI0216 12:22:45.799114    2940 log.go:172] (0xc0006a8370) Reply frame received for 1\nI0216 12:22:45.799148    2940 log.go:172] (0xc0006a8370) (0xc00064cd20) Create stream\nI0216 12:22:45.799156    2940 log.go:172] (0xc0006a8370) (0xc00064cd20) Stream added, broadcasting: 3\nI0216 12:22:45.800420    2940 log.go:172] (0xc0006a8370) Reply frame received for 3\nI0216 12:22:45.800439    2940 log.go:172] (0xc0006a8370) (0xc00064ce60) Create stream\nI0216 12:22:45.800447    2940 log.go:172] (0xc0006a8370) (0xc00064ce60) Stream added, broadcasting: 5\nI0216 12:22:45.801389    2940 log.go:172] (0xc0006a8370) Reply frame received for 5\nI0216 12:22:45.980054    2940 log.go:172] (0xc0006a8370) Data frame received for 3\nI0216 12:22:45.980119    2940 log.go:172] (0xc00064cd20) (3) Data frame handling\nI0216 12:22:45.980139    2940 log.go:172] (0xc00064cd20) (3) Data frame sent\nI0216 12:22:45.980174    2940 log.go:172] (0xc0006a8370) Data frame received for 5\nI0216 12:22:45.980186    2940 log.go:172] (0xc00064ce60) (5) Data frame handling\nI0216 12:22:45.980206    2940 log.go:172] (0xc00064ce60) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0216 12:22:46.103094    2940 log.go:172] (0xc0006a8370) (0xc00064cd20) Stream removed, broadcasting: 3\nI0216 12:22:46.103178    2940 log.go:172] (0xc0006a8370) Data frame received for 1\nI0216 12:22:46.103189    2940 log.go:172] (0xc0006e8640) (1) Data frame handling\nI0216 12:22:46.103203    2940 log.go:172] (0xc0006e8640) (1) Data frame sent\nI0216 12:22:46.103295    2940 log.go:172] (0xc0006a8370) (0xc0006e8640) Stream removed, broadcasting: 1\nI0216 12:22:46.103380    2940 log.go:172] (0xc0006a8370) (0xc00064ce60) Stream removed, broadcasting: 5\nI0216 12:22:46.103437    2940 log.go:172] (0xc0006a8370) (0xc0006e8640) Stream removed, broadcasting: 1\nI0216 12:22:46.103446    2940 log.go:172] (0xc0006a8370) (0xc00064cd20) Stream removed, broadcasting: 3\nI0216 12:22:46.103452    2940 log.go:172] (0xc0006a8370) (0xc00064ce60) Stream removed, broadcasting: 5\n"
Feb 16 12:22:46.112: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 12:22:46.112: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 12:22:46.126: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:22:46.126: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 12:22:56.156: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:22:56.156: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:22:56.156: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 16 12:22:56.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 12:22:56.704: INFO: stderr: "I0216 12:22:56.339364    2961 log.go:172] (0xc000702370) (0xc0006375e0) Create stream\nI0216 12:22:56.339553    2961 log.go:172] (0xc000702370) (0xc0006375e0) Stream added, broadcasting: 1\nI0216 12:22:56.346208    2961 log.go:172] (0xc000702370) Reply frame received for 1\nI0216 12:22:56.346268    2961 log.go:172] (0xc000702370) (0xc0006a4000) Create stream\nI0216 12:22:56.346275    2961 log.go:172] (0xc000702370) (0xc0006a4000) Stream added, broadcasting: 3\nI0216 12:22:56.347617    2961 log.go:172] (0xc000702370) Reply frame received for 3\nI0216 12:22:56.347640    2961 log.go:172] (0xc000702370) (0xc000584000) Create stream\nI0216 12:22:56.347649    2961 log.go:172] (0xc000702370) (0xc000584000) Stream added, broadcasting: 5\nI0216 12:22:56.349055    2961 log.go:172] (0xc000702370) Reply frame received for 5\nI0216 12:22:56.457363    2961 log.go:172] (0xc000702370) Data frame received for 3\nI0216 12:22:56.457468    2961 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0216 12:22:56.457489    2961 log.go:172] (0xc0006a4000) (3) Data frame sent\nI0216 12:22:56.689333    2961 log.go:172] (0xc000702370) Data frame received for 1\nI0216 12:22:56.689412    2961 log.go:172] (0xc0006375e0) (1) Data frame handling\nI0216 12:22:56.689441    2961 log.go:172] (0xc0006375e0) (1) Data frame sent\nI0216 12:22:56.689591    2961 log.go:172] (0xc000702370) (0xc0006375e0) Stream removed, broadcasting: 1\nI0216 12:22:56.689878    2961 log.go:172] (0xc000702370) (0xc0006a4000) Stream removed, broadcasting: 3\nI0216 12:22:56.691674    2961 log.go:172] (0xc000702370) (0xc000584000) Stream removed, broadcasting: 5\nI0216 12:22:56.691733    2961 log.go:172] (0xc000702370) (0xc0006375e0) Stream removed, broadcasting: 1\nI0216 12:22:56.691747    2961 log.go:172] (0xc000702370) (0xc0006a4000) Stream removed, broadcasting: 3\nI0216 12:22:56.691753    2961 log.go:172] (0xc000702370) (0xc000584000) Stream removed, broadcasting: 5\n"
Feb 16 12:22:56.705: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 12:22:56.705: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 12:22:56.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 12:22:57.440: INFO: stderr: "I0216 12:22:57.157647    2983 log.go:172] (0xc00015c840) (0xc0007ce640) Create stream\nI0216 12:22:57.157892    2983 log.go:172] (0xc00015c840) (0xc0007ce640) Stream added, broadcasting: 1\nI0216 12:22:57.170914    2983 log.go:172] (0xc00015c840) Reply frame received for 1\nI0216 12:22:57.170941    2983 log.go:172] (0xc00015c840) (0xc00068ec80) Create stream\nI0216 12:22:57.170947    2983 log.go:172] (0xc00015c840) (0xc00068ec80) Stream added, broadcasting: 3\nI0216 12:22:57.172114    2983 log.go:172] (0xc00015c840) Reply frame received for 3\nI0216 12:22:57.172146    2983 log.go:172] (0xc00015c840) (0xc00068edc0) Create stream\nI0216 12:22:57.172155    2983 log.go:172] (0xc00015c840) (0xc00068edc0) Stream added, broadcasting: 5\nI0216 12:22:57.189520    2983 log.go:172] (0xc00015c840) Reply frame received for 5\nI0216 12:22:57.347517    2983 log.go:172] (0xc00015c840) Data frame received for 3\nI0216 12:22:57.347547    2983 log.go:172] (0xc00068ec80) (3) Data frame handling\nI0216 12:22:57.347560    2983 log.go:172] (0xc00068ec80) (3) Data frame sent\nI0216 12:22:57.434367    2983 log.go:172] (0xc00015c840) Data frame received for 1\nI0216 12:22:57.434398    2983 log.go:172] (0xc0007ce640) (1) Data frame handling\nI0216 12:22:57.434408    2983 log.go:172] (0xc0007ce640) (1) Data frame sent\nI0216 12:22:57.434420    2983 log.go:172] (0xc00015c840) (0xc0007ce640) Stream removed, broadcasting: 1\nI0216 12:22:57.434772    2983 log.go:172] (0xc00015c840) (0xc00068ec80) Stream removed, broadcasting: 3\nI0216 12:22:57.435419    2983 log.go:172] (0xc00015c840) (0xc00068edc0) Stream removed, broadcasting: 5\nI0216 12:22:57.435450    2983 log.go:172] (0xc00015c840) (0xc0007ce640) Stream removed, broadcasting: 1\nI0216 12:22:57.435462    2983 log.go:172] (0xc00015c840) (0xc00068ec80) Stream removed, broadcasting: 3\nI0216 12:22:57.435468    2983 log.go:172] (0xc00015c840) (0xc00068edc0) Stream removed, broadcasting: 5\n"
Feb 16 12:22:57.441: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 12:22:57.441: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 12:22:57.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 12:22:57.970: INFO: stderr: "I0216 12:22:57.578534    3006 log.go:172] (0xc0008682c0) (0xc00073c640) Create stream\nI0216 12:22:57.578670    3006 log.go:172] (0xc0008682c0) (0xc00073c640) Stream added, broadcasting: 1\nI0216 12:22:57.582986    3006 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0216 12:22:57.583011    3006 log.go:172] (0xc0008682c0) (0xc000580c80) Create stream\nI0216 12:22:57.583020    3006 log.go:172] (0xc0008682c0) (0xc000580c80) Stream added, broadcasting: 3\nI0216 12:22:57.583904    3006 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0216 12:22:57.583933    3006 log.go:172] (0xc0008682c0) (0xc00041a000) Create stream\nI0216 12:22:57.583939    3006 log.go:172] (0xc0008682c0) (0xc00041a000) Stream added, broadcasting: 5\nI0216 12:22:57.584666    3006 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0216 12:22:57.762428    3006 log.go:172] (0xc0008682c0) Data frame received for 3\nI0216 12:22:57.762466    3006 log.go:172] (0xc000580c80) (3) Data frame handling\nI0216 12:22:57.762490    3006 log.go:172] (0xc000580c80) (3) Data frame sent\nI0216 12:22:57.952320    3006 log.go:172] (0xc0008682c0) (0xc000580c80) Stream removed, broadcasting: 3\nI0216 12:22:57.952456    3006 log.go:172] (0xc0008682c0) Data frame received for 1\nI0216 12:22:57.952475    3006 log.go:172] (0xc00073c640) (1) Data frame handling\nI0216 12:22:57.952490    3006 log.go:172] (0xc00073c640) (1) Data frame sent\nI0216 12:22:57.952498    3006 log.go:172] (0xc0008682c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0216 12:22:57.952543    3006 log.go:172] (0xc0008682c0) (0xc00041a000) Stream removed, broadcasting: 5\nI0216 12:22:57.952618    3006 log.go:172] (0xc0008682c0) Go away received\nI0216 12:22:57.952760    3006 log.go:172] (0xc0008682c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0216 12:22:57.952994    3006 log.go:172] (0xc0008682c0) (0xc000580c80) Stream removed, broadcasting: 3\nI0216 12:22:57.953041    3006 log.go:172] (0xc0008682c0) (0xc00041a000) Stream removed, broadcasting: 5\n"
Feb 16 12:22:57.970: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 12:22:57.970: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 12:22:57.970: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 12:22:58.007: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 16 12:23:08.111: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 12:23:08.111: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 12:23:08.111: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 16 12:23:08.247: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:08.247: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:08.247: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:08.247: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:08.247: INFO: 
Feb 16 12:23:08.247: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:09.881: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:09.882: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:09.882: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:09.882: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:09.882: INFO: 
Feb 16 12:23:09.882: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:10.900: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:10.900: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:10.900: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:10.900: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:10.900: INFO: 
Feb 16 12:23:10.900: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:11.925: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:11.925: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:11.925: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:11.925: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:11.925: INFO: 
Feb 16 12:23:11.925: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:12.954: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:12.954: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:12.954: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:12.954: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:12.954: INFO: 
Feb 16 12:23:12.954: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:13.965: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:13.965: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:13.965: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:13.965: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:13.965: INFO: 
Feb 16 12:23:13.965: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:14.987: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:14.987: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:14.987: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:14.988: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:14.988: INFO: 
Feb 16 12:23:14.988: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:16.022: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:16.022: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:16.023: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:16.023: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:16.023: INFO: 
Feb 16 12:23:16.023: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:17.049: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:17.049: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:17.050: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:17.050: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:17.050: INFO: 
Feb 16 12:23:17.050: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 16 12:23:18.069: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 16 12:23:18.069: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:12 +0000 UTC  }]
Feb 16 12:23:18.069: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:18.069: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:22:33 +0000 UTC  }]
Feb 16 12:23:18.069: INFO: 
Feb 16 12:23:18.069: INFO: StatefulSet ss has not reached scale 0, at 3
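The block above is the framework's scale-down wait loop: roughly once per second it lists the pods and their conditions, then reports the current replica count until it reaches 0. A minimal local sketch of that poll loop (the replica count is simulated with a counter here; in the real test each iteration re-queries the StatefulSet via the API and sleeps 1s — names taken from the log):

```shell
# Simulated poll-until-zero loop, mirroring the "has not reached scale 0"
# lines above. In the e2e test, `replicas` would come from something like
# `kubectl get statefulset ss -o jsonpath='{.status.replicas}'`.
replicas=3
while [ "$replicas" -gt 0 ]; do
  echo "StatefulSet ss has not reached scale 0, at ${replicas}"
  replicas=$((replicas - 1))   # real loop: re-query the API and sleep 1s
done
echo "reached scale 0"
```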
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-m2f4m
Feb 16 12:23:19.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:23:19.295: INFO: rc: 1
Feb 16 12:23:19.295: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001d7cd20 exit status 1   true [0xc0009cac10 0xc0009cac70 0xc0009cacb0] [0xc0009cac10 0xc0009cac70 0xc0009cacb0] [0xc0009cac58 0xc0009cac98] [0x935700 0x935700] 0xc0020bb140 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
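Note where the `rc: 1` in each retry comes from: the trailing `|| true` guards only the inner `mv` inside the pod, so the non-zero status is kubectl exec's own failure (first "container not found", then "pods \"ss-0\" not found" once the pod is deleted). A minimal local illustration of that guard's behavior, run outside any cluster:

```shell
# `|| true` swallows the inner command's failure: the shell exits 0 even
# though `mv` fails on a nonexistent source path.
sh -c 'mv -v /no/such/file /tmp/ 2>/dev/null || true'
echo "guarded inner command rc: $?"    # 0: the mv failure is masked

# Without the guard, the inner command's failure propagates.
sh -c 'mv -v /no/such/file /tmp/ 2>/dev/null'
echo "unguarded inner command rc: $?"  # non-zero
```

In the log, kubectl's exec transport fails before the inner shell ever runs, so the guard never comes into play and the RunHostCmd retry keeps seeing exit status 1.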

Feb 16 12:23:29.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:23:29.437: INFO: rc: 1
Feb 16 12:23:29.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000b7f950 exit status 1   true [0xc001a98060 0xc001a98078 0xc001a98090] [0xc001a98060 0xc001a98078 0xc001a98090] [0xc001a98070 0xc001a98088] [0x935700 0x935700] 0xc002399020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:23:39.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:23:39.620: INFO: rc: 1
Feb 16 12:23:39.620: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001387380 exit status 1   true [0xc00040bf88 0xc00040bfa0 0xc00110a000] [0xc00040bf88 0xc00040bfa0 0xc00110a000] [0xc00040bf98 0xc00040bfe8] [0x935700 0x935700] 0xc001758180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:23:49.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:23:49.776: INFO: rc: 1
Feb 16 12:23:49.777: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000c39560 exit status 1   true [0xc000342378 0xc000342390 0xc0003423a8] [0xc000342378 0xc000342390 0xc0003423a8] [0xc000342388 0xc0003423a0] [0x935700 0x935700] 0xc001f2ea80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:23:59.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:23:59.977: INFO: rc: 1
Feb 16 12:23:59.977: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013874d0 exit status 1   true [0xc00110a020 0xc00110a040 0xc00110a080] [0xc00110a020 0xc00110a040 0xc00110a080] [0xc00110a030 0xc00110a078] [0x935700 0x935700] 0xc001758420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:24:09.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:24:10.191: INFO: rc: 1
Feb 16 12:24:10.191: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013876b0 exit status 1   true [0xc00110a088 0xc00110a0a8 0xc00110a0c0] [0xc00110a088 0xc00110a0a8 0xc00110a0c0] [0xc00110a0a0 0xc00110a0b8] [0x935700 0x935700] 0xc0017586c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:24:20.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:24:20.342: INFO: rc: 1
Feb 16 12:24:20.342: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000c39680 exit status 1   true [0xc0003423b0 0xc0003423c8 0xc0003423e0] [0xc0003423b0 0xc0003423c8 0xc0003423e0] [0xc0003423c0 0xc0003423d8] [0x935700 0x935700] 0xc001f2f740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:24:30.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:24:30.505: INFO: rc: 1
Feb 16 12:24:30.505: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000c397a0 exit status 1   true [0xc0003423e8 0xc000342400 0xc000342418] [0xc0003423e8 0xc000342400 0xc000342418] [0xc0003423f8 0xc000342410] [0x935700 0x935700] 0xc001f2ff20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:24:40.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:24:40.660: INFO: rc: 1
Feb 16 12:24:40.661: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329410 exit status 1   true [0xc00040abc8 0xc00040ad18 0xc00040aea0] [0xc00040abc8 0xc00040ad18 0xc00040aea0] [0xc00040ac40 0xc00040ae78] [0x935700 0x935700] 0xc001f2e3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:24:50.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:24:50.816: INFO: rc: 1
Feb 16 12:24:50.817: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016742a0 exit status 1   true [0xc00000e010 0xc0009ca0f0 0xc0009ca250] [0xc00000e010 0xc0009ca0f0 0xc0009ca250] [0xc0009ca098 0xc0009ca1e0] [0x935700 0x935700] 0xc0020247e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:25:00.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:25:00.932: INFO: rc: 1
Feb 16 12:25:00.933: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016744e0 exit status 1   true [0xc0009ca280 0xc0009ca390 0xc0009ca548] [0xc0009ca280 0xc0009ca390 0xc0009ca548] [0xc0009ca350 0xc0009ca528] [0x935700 0x935700] 0xc002024a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:25:10.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:25:11.141: INFO: rc: 1
Feb 16 12:25:11.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329590 exit status 1   true [0xc00040af68 0xc00040b068 0xc00040b198] [0xc00040af68 0xc00040b068 0xc00040b198] [0xc00040b048 0xc00040b118] [0x935700 0x935700] 0xc001f2e8a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:25:21.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:25:21.275: INFO: rc: 1
Feb 16 12:25:21.275: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001958120 exit status 1   true [0xc00110a000 0xc00110a030 0xc00110a078] [0xc00110a000 0xc00110a030 0xc00110a078] [0xc00110a028 0xc00110a060] [0x935700 0x935700] 0xc002470660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:25:31.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:25:31.474: INFO: rc: 1
Feb 16 12:25:31.475: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329830 exit status 1   true [0xc00040b1b0 0xc00040b2c8 0xc00040b4e8] [0xc00040b1b0 0xc00040b2c8 0xc00040b4e8] [0xc00040b278 0xc00040b450] [0x935700 0x935700] 0xc001f2ecc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:25:41.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:25:41.717: INFO: rc: 1
Feb 16 12:25:41.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019582a0 exit status 1   true [0xc00110a080 0xc00110a0a0 0xc00110a0b8] [0xc00110a080 0xc00110a0a0 0xc00110a0b8] [0xc00110a098 0xc00110a0b0] [0x935700 0x935700] 0xc002470d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:25:51.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:25:51.916: INFO: rc: 1
Feb 16 12:25:51.916: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0003299e0 exit status 1   true [0xc00040b500 0xc00040b598 0xc00040b630] [0xc00040b500 0xc00040b598 0xc00040b630] [0xc00040b568 0xc00040b5e0] [0x935700 0x935700] 0xc001f2fe60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:26:01.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:26:02.036: INFO: rc: 1
Feb 16 12:26:02.037: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001674630 exit status 1   true [0xc0009ca5e0 0xc0009ca648 0xc0009ca720] [0xc0009ca5e0 0xc0009ca648 0xc0009ca720] [0xc0009ca638 0xc0009ca6a8] [0x935700 0x935700] 0xc002025380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:26:12.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:26:12.254: INFO: rc: 1
Feb 16 12:26:12.254: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019583c0 exit status 1   true [0xc00110a0c0 0xc00110a0f0 0xc00110a130] [0xc00110a0c0 0xc00110a0f0 0xc00110a130] [0xc00110a0e8 0xc00110a110] [0x935700 0x935700] 0xc002471a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:26:22.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:26:22.424: INFO: rc: 1
Feb 16 12:26:22.425: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0016747e0 exit status 1   true [0xc0009ca760 0xc0009ca7d8 0xc0009ca8b8] [0xc0009ca760 0xc0009ca7d8 0xc0009ca8b8] [0xc0009ca798 0xc0009ca858] [0x935700 0x935700] 0xc00210a000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:26:32.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:26:32.642: INFO: rc: 1
Feb 16 12:26:32.642: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329b30 exit status 1   true [0xc00040b660 0xc00040b6b8 0xc00040b740] [0xc00040b660 0xc00040b6b8 0xc00040b740] [0xc00040b698 0xc00040b728] [0x935700 0x935700] 0xc0022385a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:26:42.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:26:42.735: INFO: rc: 1
Feb 16 12:26:42.736: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329440 exit status 1   true [0xc0000e8218 0xc00040ac40 0xc00040ae78] [0xc0000e8218 0xc00040ac40 0xc00040ae78] [0xc00040ac00 0xc00040ae20] [0x935700 0x935700] 0xc0020247e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:26:52.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:26:52.993: INFO: rc: 1
Feb 16 12:26:52.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019c0120 exit status 1   true [0xc00110a000 0xc00110a030 0xc00110a078] [0xc00110a000 0xc00110a030 0xc00110a078] [0xc00110a028 0xc00110a060] [0x935700 0x935700] 0xc001f2e3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:27:02.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:27:03.132: INFO: rc: 1
Feb 16 12:27:03.133: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001958180 exit status 1   true [0xc0009ca028 0xc0009ca110 0xc0009ca280] [0xc0009ca028 0xc0009ca110 0xc0009ca280] [0xc0009ca0f0 0xc0009ca250] [0x935700 0x935700] 0xc002238660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:27:13.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:27:13.283: INFO: rc: 1
Feb 16 12:27:13.283: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329620 exit status 1   true [0xc00040aea0 0xc00040b048 0xc00040b118] [0xc00040aea0 0xc00040b048 0xc00040b118] [0xc00040b000 0xc00040b090] [0x935700 0x935700] 0xc002024a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:27:23.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:27:23.427: INFO: rc: 1
Feb 16 12:27:23.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329890 exit status 1   true [0xc00040b198 0xc00040b278 0xc00040b450] [0xc00040b198 0xc00040b278 0xc00040b450] [0xc00040b1b8 0xc00040b350] [0x935700 0x935700] 0xc002025380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:27:33.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:27:33.587: INFO: rc: 1
Feb 16 12:27:33.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001674300 exit status 1   true [0xc0003420b8 0xc0003420d0 0xc0003420f8] [0xc0003420b8 0xc0003420d0 0xc0003420f8] [0xc0003420c8 0xc0003420e8] [0x935700 0x935700] 0xc002470660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:27:43.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:27:43.744: INFO: rc: 1
Feb 16 12:27:43.744: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001674540 exit status 1   true [0xc000342108 0xc000342128 0xc000342168] [0xc000342108 0xc000342128 0xc000342168] [0xc000342120 0xc000342160] [0x935700 0x935700] 0xc002470d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:27:53.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:27:53.915: INFO: rc: 1
Feb 16 12:27:53.915: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329a40 exit status 1   true [0xc00040b4e8 0xc00040b568 0xc00040b5e0] [0xc00040b4e8 0xc00040b568 0xc00040b5e0] [0xc00040b538 0xc00040b5c0] [0x935700 0x935700] 0xc00210a000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:28:03.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:28:04.121: INFO: rc: 1
Feb 16 12:28:04.122: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000329bf0 exit status 1   true [0xc00040b630 0xc00040b698 0xc00040b728] [0xc00040b630 0xc00040b698 0xc00040b728] [0xc00040b688 0xc00040b6c8] [0x935700 0x935700] 0xc00210a840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:28:14.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:28:14.229: INFO: rc: 1
Feb 16 12:28:14.229: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001958300 exit status 1   true [0xc0009ca338 0xc0009ca458 0xc0009ca5e0] [0xc0009ca338 0xc0009ca458 0xc0009ca5e0] [0xc0009ca390 0xc0009ca548] [0x935700 0x935700] 0xc002238900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 16 12:28:24.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m2f4m ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:28:24.449: INFO: rc: 1
Feb 16 12:28:24.449: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 16 12:28:24.450: INFO: Scaling statefulset ss to 0
Feb 16 12:28:24.537: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 16 12:28:24.547: INFO: Deleting all statefulset in ns e2e-tests-statefulset-m2f4m
Feb 16 12:28:24.560: INFO: Scaling statefulset ss to 0
Feb 16 12:28:24.599: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 12:28:24.604: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:28:24.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-m2f4m" for this suite.
Feb 16 12:28:32.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:28:32.797: INFO: namespace: e2e-tests-statefulset-m2f4m, resource: bindings, ignored listing per whitelist
Feb 16 12:28:32.923: INFO: namespace e2e-tests-statefulset-m2f4m deletion completed in 8.205213788s

• [SLOW TEST:381.597 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:28:32.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-d8932d13-50b7-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 12:28:33.116: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-5kfw5" to be "success or failure"
Feb 16 12:28:33.133: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.254187ms
Feb 16 12:28:35.147: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031617365s
Feb 16 12:28:37.170: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054446123s
Feb 16 12:28:39.343: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227405967s
Feb 16 12:28:41.716: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.600211105s
Feb 16 12:28:43.974: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.857661404s
Feb 16 12:28:46.493: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.377239883s
STEP: Saw pod success
Feb 16 12:28:46.493: INFO: Pod "pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:28:46.508: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 12:28:46.998: INFO: Waiting for pod pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008 to disappear
Feb 16 12:28:47.007: INFO: Pod pod-projected-secrets-d8942073-50b7-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:28:47.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5kfw5" for this suite.
Feb 16 12:28:55.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:28:55.185: INFO: namespace: e2e-tests-projected-5kfw5, resource: bindings, ignored listing per whitelist
Feb 16 12:28:55.239: INFO: namespace e2e-tests-projected-5kfw5 deletion completed in 8.219155183s

• [SLOW TEST:22.316 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:28:55.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 16 12:28:55.389: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 16 12:28:55.483: INFO: Waiting for terminating namespaces to be deleted...
Feb 16 12:28:55.488: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 16 12:28:55.505: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:28:55.505: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 16 12:28:55.505: INFO: 	Container weave ready: true, restart count 0
Feb 16 12:28:55.505: INFO: 	Container weave-npc ready: true, restart count 0
Feb 16 12:28:55.505: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 16 12:28:55.505: INFO: 	Container coredns ready: true, restart count 0
Feb 16 12:28:55.505: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:28:55.505: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:28:55.505: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 16 12:28:55.505: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 16 12:28:55.505: INFO: 	Container coredns ready: true, restart count 0
Feb 16 12:28:55.505: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 16 12:28:55.505: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f3e1c6374e7562], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:28:56.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-pn279" for this suite.
Feb 16 12:29:02.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:29:02.836: INFO: namespace: e2e-tests-sched-pred-pn279, resource: bindings, ignored listing per whitelist
Feb 16 12:29:02.933: INFO: namespace e2e-tests-sched-pred-pn279 deletion completed in 6.270002546s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.694 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:29:02.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:29:03.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2gj4b" for this suite.
Feb 16 12:29:09.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:29:09.677: INFO: namespace: e2e-tests-kubelet-test-2gj4b, resource: bindings, ignored listing per whitelist
Feb 16 12:29:09.780: INFO: namespace e2e-tests-kubelet-test-2gj4b deletion completed in 6.30748766s

• [SLOW TEST:6.845 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:29:09.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:29:41.977: INFO: Container started at 2020-02-16 12:29:18 +0000 UTC, pod became ready at 2020-02-16 12:29:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:29:41.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rfgkx" for this suite.
Feb 16 12:30:06.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:30:06.193: INFO: namespace: e2e-tests-container-probe-rfgkx, resource: bindings, ignored listing per whitelist
Feb 16 12:30:06.292: INFO: namespace e2e-tests-container-probe-rfgkx deletion completed in 24.288504017s

• [SLOW TEST:56.512 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:30:06.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:30:06.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:30:17.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kskbl" for this suite.
Feb 16 12:30:59.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:30:59.506: INFO: namespace: e2e-tests-pods-kskbl, resource: bindings, ignored listing per whitelist
Feb 16 12:30:59.530: INFO: namespace e2e-tests-pods-kskbl deletion completed in 42.274283676s

• [SLOW TEST:53.237 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:30:59.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-2k5h
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 12:30:59.745: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2k5h" in namespace "e2e-tests-subpath-vm5tn" to be "success or failure"
Feb 16 12:30:59.803: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 57.966968ms
Feb 16 12:31:01.834: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088813591s
Feb 16 12:31:03.852: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106916238s
Feb 16 12:31:06.442: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697584005s
Feb 16 12:31:08.485: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739798976s
Feb 16 12:31:10.504: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.759617532s
Feb 16 12:31:12.537: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.79264963s
Feb 16 12:31:14.799: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.053965961s
Feb 16 12:31:16.823: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Pending", Reason="", readiness=false. Elapsed: 17.078570491s
Feb 16 12:31:18.836: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 19.091078422s
Feb 16 12:31:20.858: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 21.113243996s
Feb 16 12:31:22.884: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 23.139634314s
Feb 16 12:31:25.075: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 25.330592001s
Feb 16 12:31:27.088: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 27.343751375s
Feb 16 12:31:29.106: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 29.36159328s
Feb 16 12:31:31.119: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 31.374124915s
Feb 16 12:31:33.139: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 33.394719452s
Feb 16 12:31:35.181: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Running", Reason="", readiness=false. Elapsed: 35.436267638s
Feb 16 12:31:37.726: INFO: Pod "pod-subpath-test-projected-2k5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.980763305s
STEP: Saw pod success
Feb 16 12:31:37.726: INFO: Pod "pod-subpath-test-projected-2k5h" satisfied condition "success or failure"
Feb 16 12:31:38.441: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-2k5h container test-container-subpath-projected-2k5h: 
STEP: delete the pod
Feb 16 12:31:38.867: INFO: Waiting for pod pod-subpath-test-projected-2k5h to disappear
Feb 16 12:31:38.882: INFO: Pod pod-subpath-test-projected-2k5h no longer exists
STEP: Deleting pod pod-subpath-test-projected-2k5h
Feb 16 12:31:38.882: INFO: Deleting pod "pod-subpath-test-projected-2k5h" in namespace "e2e-tests-subpath-vm5tn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:31:38.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vm5tn" for this suite.
Feb 16 12:31:47.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:31:47.161: INFO: namespace: e2e-tests-subpath-vm5tn, resource: bindings, ignored listing per whitelist
Feb 16 12:31:47.183: INFO: namespace e2e-tests-subpath-vm5tn deletion completed in 8.281423873s

• [SLOW TEST:47.653 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:31:47.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:31:47.443: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-fvrcj" to be "success or failure"
Feb 16 12:31:47.536: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 92.376399ms
Feb 16 12:31:49.568: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124612061s
Feb 16 12:31:51.596: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152266063s
Feb 16 12:31:54.432: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.989113065s
Feb 16 12:31:56.482: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.039015886s
Feb 16 12:31:58.571: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 11.127303746s
Feb 16 12:32:00.582: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.139045333s
STEP: Saw pod success
Feb 16 12:32:00.582: INFO: Pod "downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:32:00.589: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:32:01.242: INFO: Waiting for pod downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008 to disappear
Feb 16 12:32:01.462: INFO: Pod downwardapi-volume-4c67cdac-50b8-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:32:01.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fvrcj" for this suite.
Feb 16 12:32:07.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:32:08.068: INFO: namespace: e2e-tests-downward-api-fvrcj, resource: bindings, ignored listing per whitelist
Feb 16 12:32:08.121: INFO: namespace e2e-tests-downward-api-fvrcj deletion completed in 6.411386393s

• [SLOW TEST:20.937 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:32:08.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 16 12:32:08.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-ddhwj run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 16 12:32:20.596: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0216 12:32:19.273829    3657 log.go:172] (0xc000168790) (0xc0007ee320) Create stream\nI0216 12:32:19.273889    3657 log.go:172] (0xc000168790) (0xc0007ee320) Stream added, broadcasting: 1\nI0216 12:32:19.284600    3657 log.go:172] (0xc000168790) Reply frame received for 1\nI0216 12:32:19.284667    3657 log.go:172] (0xc000168790) (0xc000816000) Create stream\nI0216 12:32:19.284686    3657 log.go:172] (0xc000168790) (0xc000816000) Stream added, broadcasting: 3\nI0216 12:32:19.285726    3657 log.go:172] (0xc000168790) Reply frame received for 3\nI0216 12:32:19.285755    3657 log.go:172] (0xc000168790) (0xc0008160a0) Create stream\nI0216 12:32:19.285761    3657 log.go:172] (0xc000168790) (0xc0008160a0) Stream added, broadcasting: 5\nI0216 12:32:19.286530    3657 log.go:172] (0xc000168790) Reply frame received for 5\nI0216 12:32:19.286611    3657 log.go:172] (0xc000168790) (0xc0007ee3c0) Create stream\nI0216 12:32:19.286629    3657 log.go:172] (0xc000168790) (0xc0007ee3c0) Stream added, broadcasting: 7\nI0216 12:32:19.293783    3657 log.go:172] (0xc000168790) Reply frame received for 7\nI0216 12:32:19.294280    3657 log.go:172] (0xc000816000) (3) Writing data frame\nI0216 12:32:19.295230    3657 log.go:172] (0xc000816000) (3) Writing data frame\nI0216 12:32:19.324659    3657 log.go:172] (0xc000168790) Data frame received for 5\nI0216 12:32:19.324862    3657 log.go:172] (0xc0008160a0) (5) Data frame handling\nI0216 12:32:19.324906    3657 log.go:172] (0xc0008160a0) (5) Data frame sent\nI0216 12:32:19.330246    3657 log.go:172] (0xc000168790) Data frame received for 5\nI0216 12:32:19.330267    3657 log.go:172] (0xc0008160a0) (5) Data frame handling\nI0216 12:32:19.330289    3657 log.go:172] (0xc0008160a0) (5) Data frame 
sent\nI0216 12:32:20.519521    3657 log.go:172] (0xc000168790) Data frame received for 1\nI0216 12:32:20.519613    3657 log.go:172] (0xc0007ee320) (1) Data frame handling\nI0216 12:32:20.519634    3657 log.go:172] (0xc0007ee320) (1) Data frame sent\nI0216 12:32:20.519682    3657 log.go:172] (0xc000168790) (0xc0008160a0) Stream removed, broadcasting: 5\nI0216 12:32:20.519777    3657 log.go:172] (0xc000168790) (0xc0007ee320) Stream removed, broadcasting: 1\nI0216 12:32:20.519874    3657 log.go:172] (0xc000168790) (0xc000816000) Stream removed, broadcasting: 3\nI0216 12:32:20.520133    3657 log.go:172] (0xc000168790) (0xc0007ee3c0) Stream removed, broadcasting: 7\nI0216 12:32:20.520239    3657 log.go:172] (0xc000168790) (0xc0007ee320) Stream removed, broadcasting: 1\nI0216 12:32:20.520266    3657 log.go:172] (0xc000168790) (0xc000816000) Stream removed, broadcasting: 3\nI0216 12:32:20.520285    3657 log.go:172] (0xc000168790) (0xc0008160a0) Stream removed, broadcasting: 5\nI0216 12:32:20.520310    3657 log.go:172] (0xc000168790) (0xc0007ee3c0) Stream removed, broadcasting: 7\nI0216 12:32:20.520734    3657 log.go:172] (0xc000168790) Go away received\n"
Feb 16 12:32:20.596: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:32:22.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ddhwj" for this suite.
Feb 16 12:32:28.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:32:28.994: INFO: namespace: e2e-tests-kubectl-ddhwj, resource: bindings, ignored listing per whitelist
Feb 16 12:32:29.063: INFO: namespace e2e-tests-kubectl-ddhwj deletion completed in 6.184552434s

• [SLOW TEST:20.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
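The deprecated `kubectl run --generator=job/v1` invocation exercised above can also be written declaratively. A minimal Job manifest roughly equivalent to what the test runs (names taken from the log; the `--rm` and `--attach` behavior has no declarative equivalent and would be `kubectl attach` plus `kubectl delete job` afterwards):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job        # job name from the log
spec:
  template:
    spec:
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                    # matches --stdin in the logged command
      restartPolicy: OnFailure         # matches --restart=OnFailure
```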
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:32:29.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 16 12:32:29.362: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix642016961/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:32:29.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-84jt2" for this suite.
Feb 16 12:32:35.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:32:35.622: INFO: namespace: e2e-tests-kubectl-84jt2, resource: bindings, ignored listing per whitelist
Feb 16 12:32:35.666: INFO: namespace e2e-tests-kubectl-84jt2 deletion completed in 6.155597782s

• [SLOW TEST:6.603 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:32:35.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:33:35.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mrzh8" for this suite.
Feb 16 12:33:59.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:33:59.948: INFO: namespace: e2e-tests-container-probe-mrzh8, resource: bindings, ignored listing per whitelist
Feb 16 12:34:00.044: INFO: namespace e2e-tests-container-probe-mrzh8 deletion completed in 24.200077371s

• [SLOW TEST:84.378 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
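The probe behavior this spec verifies can be sketched as a pod whose readiness probe always fails (image and probe command are assumptions; the e2e test uses its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready          # illustrative name
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]        # always fails, so the pod is never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
```

Because only the readiness probe fails and there is no liveness probe, the kubelet never restarts the container; it simply never marks the pod Ready, which is exactly what the test asserts over its 60-second observation window.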
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:34:00.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:34:00.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:34:10.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t2gkd" for this suite.
Feb 16 12:34:54.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:34:54.464: INFO: namespace: e2e-tests-pods-t2gkd, resource: bindings, ignored listing per whitelist
Feb 16 12:34:54.730: INFO: namespace e2e-tests-pods-t2gkd deletion completed in 44.408005483s

• [SLOW TEST:54.686 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
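The websocket-logs spec reads container output through the apiserver's pod `log` subresource; the pod it submits only needs to write to stdout. A minimal sketch (name, image, and message are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-logs-websocket             # illustrative name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "echo container is alive; sleep 10000"]
```

`kubectl logs pod-logs-websocket` (or a websocket GET on `/api/v1/namespaces/<ns>/pods/<name>/log`) would then return the echoed line.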
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:34:54.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-bc2477d9-50b8-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 12:34:54.912: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-flqft" to be "success or failure"
Feb 16 12:34:54.931: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.710657ms
Feb 16 12:34:56.954: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04179363s
Feb 16 12:34:58.974: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061562079s
Feb 16 12:35:01.071: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159020913s
Feb 16 12:35:03.126: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213749586s
Feb 16 12:35:05.153: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.240774757s
STEP: Saw pod success
Feb 16 12:35:05.153: INFO: Pod "pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:35:05.162: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 12:35:05.320: INFO: Waiting for pod pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008 to disappear
Feb 16 12:35:05.334: INFO: Pod pod-projected-secrets-bc253484-50b8-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:35:05.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-flqft" for this suite.
Feb 16 12:35:11.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:35:11.463: INFO: namespace: e2e-tests-projected-flqft, resource: bindings, ignored listing per whitelist
Feb 16 12:35:11.549: INFO: namespace e2e-tests-projected-flqft deletion completed in 6.201998555s

• [SLOW TEST:16.818 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
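The `defaultMode` behavior under test applies a file mode to every key materialized from the projected secret volume. A minimal sketch of such a pod (secret name, mode value, and image are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets          # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test # container name from the log
    image: docker.io/library/busybox:1.29   # assumption; the e2e uses a mount-test image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                # the mode under test; value illustrative
      sources:
      - secret:
          name: projected-secret-test  # secret name illustrative
```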
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:35:11.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 16 12:35:11.993: INFO: Waiting up to 5m0s for pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-h6bmf" to be "success or failure"
Feb 16 12:35:12.019: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.06788ms
Feb 16 12:35:14.170: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176547752s
Feb 16 12:35:16.189: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195449643s
Feb 16 12:35:18.356: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362734075s
Feb 16 12:35:20.373: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379185055s
Feb 16 12:35:22.449: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.455721005s
STEP: Saw pod success
Feb 16 12:35:22.450: INFO: Pod "pod-c65228dc-50b8-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:35:22.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c65228dc-50b8-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 12:35:22.801: INFO: Waiting for pod pod-c65228dc-50b8-11ea-aa00-0242ac110008 to disappear
Feb 16 12:35:22.810: INFO: Pod pod-c65228dc-50b8-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:35:22.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h6bmf" for this suite.
Feb 16 12:35:28.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:35:28.956: INFO: namespace: e2e-tests-emptydir-h6bmf, resource: bindings, ignored listing per whitelist
Feb 16 12:35:29.062: INFO: namespace e2e-tests-emptydir-h6bmf deletion completed in 6.243312469s

• [SLOW TEST:17.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
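The "(non-root,0666,default)" case means: run as a non-root user, create a file with mode 0666, on the default emptyDir medium (node disk, not tmpfs). A minimal sketch (uid, image, and commands are assumptions; the e2e uses its own mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666              # illustrative name
spec:
  securityContext:
    runAsUser: 1001                    # the "non-root" part of the test
  containers:
  - name: test-container               # container name from the log
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium: backed by node disk
```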
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:35:29.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d09f06a6-50b8-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 12:35:29.355: INFO: Waiting up to 5m0s for pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-jvddk" to be "success or failure"
Feb 16 12:35:29.391: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 35.453516ms
Feb 16 12:35:31.403: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047381346s
Feb 16 12:35:33.434: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0789791s
Feb 16 12:35:35.871: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.515748641s
Feb 16 12:35:37.884: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528747515s
Feb 16 12:35:39.904: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.54871011s
STEP: Saw pod success
Feb 16 12:35:39.904: INFO: Pod "pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:35:39.912: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008 container secret-env-test: 
STEP: delete the pod
Feb 16 12:35:39.980: INFO: Waiting for pod pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008 to disappear
Feb 16 12:35:40.075: INFO: Pod pod-secrets-d0a3ac26-50b8-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:35:40.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jvddk" for this suite.
Feb 16 12:35:46.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:35:46.385: INFO: namespace: e2e-tests-secrets-jvddk, resource: bindings, ignored listing per whitelist
Feb 16 12:35:46.395: INFO: namespace e2e-tests-secrets-jvddk deletion completed in 6.279963673s

• [SLOW TEST:17.333 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
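The secrets-in-env-vars spec maps a Secret key into a container environment variable via `secretKeyRef`. A minimal sketch (secret name, key, and image are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                    # illustrative name
type: Opaque
stringData:
  data-1: value-1                      # key/value illustrative
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  containers:
  - name: secret-env-test              # container name from the log
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
  restartPolicy: Never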
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:35:46.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 16 12:35:46.696: INFO: Waiting up to 5m0s for pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-2k25v" to be "success or failure"
Feb 16 12:35:46.831: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 134.530365ms
Feb 16 12:35:48.844: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148330652s
Feb 16 12:35:50.864: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167906192s
Feb 16 12:35:53.189: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492690925s
Feb 16 12:35:55.209: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512654416s
Feb 16 12:35:57.299: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.602875856s
Feb 16 12:35:59.317: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.620400367s
STEP: Saw pod success
Feb 16 12:35:59.317: INFO: Pod "pod-db00ab93-50b8-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:35:59.323: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db00ab93-50b8-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 12:35:59.820: INFO: Waiting for pod pod-db00ab93-50b8-11ea-aa00-0242ac110008 to disappear
Feb 16 12:35:59.837: INFO: Pod pod-db00ab93-50b8-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:35:59.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2k25v" for this suite.
Feb 16 12:36:05.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:36:06.031: INFO: namespace: e2e-tests-emptydir-2k25v, resource: bindings, ignored listing per whitelist
Feb 16 12:36:06.043: INFO: namespace e2e-tests-emptydir-2k25v deletion completed in 6.194336778s

• [SLOW TEST:19.648 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
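The tmpfs variant sets `medium: Memory` on the emptyDir, so the volume is RAM-backed and the test checks the mount's mode. A minimal sketch (image and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs             # illustrative name
spec:
  containers:
  - name: test-container               # container name from the log
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]   # prints the mount's mode and type
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir
```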
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:36:06.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:36:06.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-kpqwh" to be "success or failure"
Feb 16 12:36:06.479: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.015402ms
Feb 16 12:36:08.683: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224318062s
Feb 16 12:36:10.695: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23651452s
Feb 16 12:36:12.713: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254454044s
Feb 16 12:36:14.734: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.275323145s
Feb 16 12:36:16.761: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301998523s
Feb 16 12:36:18.782: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.323569077s
STEP: Saw pod success
Feb 16 12:36:18.782: INFO: Pod "downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:36:18.792: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:36:19.057: INFO: Waiting for pod downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008 to disappear
Feb 16 12:36:19.073: INFO: Pod downwardapi-volume-e6b62f6f-50b8-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:36:19.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kpqwh" for this suite.
Feb 16 12:36:25.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:36:25.297: INFO: namespace: e2e-tests-downward-api-kpqwh, resource: bindings, ignored listing per whitelist
Feb 16 12:36:25.424: INFO: namespace e2e-tests-downward-api-kpqwh deletion completed in 6.334195615s

• [SLOW TEST:19.380 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
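The downward API volume spec exposes the container's CPU limit as a file via `resourceFieldRef`. A minimal sketch (the limit value and image are illustrative; the container name matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test        # illustrative name
spec:
  containers:
  - name: client-container             # container name from the log
    image: docker.io/library/busybox:1.29   # assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                    # value illustrative; must be set for the field to resolve
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                  # expose the limit in millicores
```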
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:36:25.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 12:36:25.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-752kf'
Feb 16 12:36:25.914: INFO: stderr: ""
Feb 16 12:36:25.914: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 16 12:36:40.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-752kf -o json'
Feb 16 12:36:41.140: INFO: stderr: ""
Feb 16 12:36:41.141: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-16T12:36:25Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-752kf\",\n        \"resourceVersion\": \"21867871\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-752kf/pods/e2e-test-nginx-pod\",\n        \"uid\": \"f25c170c-50b8-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-54tn4\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-54tn4\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-54tn4\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T12:36:25Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T12:36:36Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T12:36:36Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-16T12:36:25Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://f0e910c16851bee2b9b9edadae11aa20810eaab89cd6fdd56094a8f72786359b\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-02-16T12:36:35Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-16T12:36:25Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 16 12:36:41.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-752kf'
Feb 16 12:36:41.714: INFO: stderr: ""
Feb 16 12:36:41.714: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb 16 12:36:41.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-752kf'
Feb 16 12:36:51.066: INFO: stderr: ""
Feb 16 12:36:51.066: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:36:51.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-752kf" for this suite.
Feb 16 12:36:57.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:36:57.210: INFO: namespace: e2e-tests-kubectl-752kf, resource: bindings, ignored listing per whitelist
Feb 16 12:36:57.231: INFO: namespace e2e-tests-kubectl-752kf deletion completed in 6.147536183s

• [SLOW TEST:31.807 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:36:57.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-052793df-50b9-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 12:36:57.411: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-dqp5v" to be "success or failure"
Feb 16 12:36:57.429: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.647604ms
Feb 16 12:36:59.440: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029316369s
Feb 16 12:37:01.452: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040415349s
Feb 16 12:37:03.958: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.546967306s
Feb 16 12:37:06.283: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.872254515s
Feb 16 12:37:08.299: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.88734695s
STEP: Saw pod success
Feb 16 12:37:08.299: INFO: Pod "pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:37:08.304: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 16 12:37:09.111: INFO: Waiting for pod pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:37:09.153: INFO: Pod pod-projected-secrets-0529b92d-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:37:09.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dqp5v" for this suite.
Feb 16 12:37:15.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:37:15.261: INFO: namespace: e2e-tests-projected-dqp5v, resource: bindings, ignored listing per whitelist
Feb 16 12:37:15.410: INFO: namespace e2e-tests-projected-dqp5v deletion completed in 6.250092402s

• [SLOW TEST:18.178 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:37:15.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 16 12:37:15.921: INFO: Waiting up to 5m0s for pod "pod-1030098f-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-pqfxb" to be "success or failure"
Feb 16 12:37:15.928: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271388ms
Feb 16 12:37:18.070: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148655241s
Feb 16 12:37:20.088: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166146208s
Feb 16 12:37:22.557: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635285884s
Feb 16 12:37:24.598: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.676998964s
Feb 16 12:37:26.634: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712101813s
Feb 16 12:37:28.651: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.729359149s
STEP: Saw pod success
Feb 16 12:37:28.651: INFO: Pod "pod-1030098f-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:37:28.657: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1030098f-50b9-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 12:37:28.712: INFO: Waiting for pod pod-1030098f-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:37:28.730: INFO: Pod pod-1030098f-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:37:28.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pqfxb" for this suite.
Feb 16 12:37:34.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:37:34.976: INFO: namespace: e2e-tests-emptydir-pqfxb, resource: bindings, ignored listing per whitelist
Feb 16 12:37:35.006: INFO: namespace e2e-tests-emptydir-pqfxb deletion completed in 6.269425243s

• [SLOW TEST:19.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:37:35.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1bb3547a-50b9-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-1bb3547a-50b9-11ea-aa00-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:37:47.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2ts8j" for this suite.
Feb 16 12:38:11.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:38:11.561: INFO: namespace: e2e-tests-configmap-2ts8j, resource: bindings, ignored listing per whitelist
Feb 16 12:38:11.667: INFO: namespace e2e-tests-configmap-2ts8j deletion completed in 24.265189508s

• [SLOW TEST:36.660 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:38:11.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 16 12:38:12.248: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 16 12:38:17.277: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:38:17.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-xq4jk" for this suite.
Feb 16 12:38:25.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:38:25.638: INFO: namespace: e2e-tests-replication-controller-xq4jk, resource: bindings, ignored listing per whitelist
Feb 16 12:38:25.701: INFO: namespace e2e-tests-replication-controller-xq4jk deletion completed in 8.289197606s

• [SLOW TEST:14.033 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:38:25.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 16 12:38:47.505: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:47.585: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:38:49.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:49.606: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:38:51.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:51.637: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:38:53.586: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:53.600: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:38:55.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:55.601: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:38:57.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:57.605: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:38:59.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:38:59.608: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:39:01.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:39:01.599: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:39:03.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:39:03.636: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:39:05.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:39:05.627: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:39:07.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:39:07.615: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 16 12:39:09.585: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 16 12:39:09.603: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:39:09.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-sf4bs" for this suite.
Feb 16 12:39:34.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:39:34.908: INFO: namespace: e2e-tests-container-lifecycle-hook-sf4bs, resource: bindings, ignored listing per whitelist
Feb 16 12:39:34.999: INFO: namespace e2e-tests-container-lifecycle-hook-sf4bs deletion completed in 25.355017551s

• [SLOW TEST:69.298 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:39:35.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-t24fz
Feb 16 12:39:45.221: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-t24fz
STEP: checking the pod's current state and verifying that restartCount is present
Feb 16 12:39:45.228: INFO: Initial restart count of pod liveness-http is 0
Feb 16 12:40:09.943: INFO: Restart count of pod e2e-tests-container-probe-t24fz/liveness-http is now 1 (24.71468281s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:40:09.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-t24fz" for this suite.
Feb 16 12:40:16.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:40:16.356: INFO: namespace: e2e-tests-container-probe-t24fz, resource: bindings, ignored listing per whitelist
Feb 16 12:40:16.483: INFO: namespace e2e-tests-container-probe-t24fz deletion completed in 6.32737819s

• [SLOW TEST:41.483 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:40:16.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-pvks
STEP: Creating a pod to test atomic-volume-subpath
Feb 16 12:40:16.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pvks" in namespace "e2e-tests-subpath-fmhrs" to be "success or failure"
Feb 16 12:40:16.782: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 12.423166ms
Feb 16 12:40:18.804: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034240398s
Feb 16 12:40:20.826: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055622194s
Feb 16 12:40:24.102: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 7.332307085s
Feb 16 12:40:26.116: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 9.34596503s
Feb 16 12:40:28.133: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 11.363389271s
Feb 16 12:40:30.147: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 13.376778316s
Feb 16 12:40:32.171: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Pending", Reason="", readiness=false. Elapsed: 15.400780484s
Feb 16 12:40:34.190: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 17.420451893s
Feb 16 12:40:36.205: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 19.435607574s
Feb 16 12:40:38.233: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 21.462886969s
Feb 16 12:40:40.252: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 23.482512134s
Feb 16 12:40:42.284: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 25.51363561s
Feb 16 12:40:44.303: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 27.533450979s
Feb 16 12:40:46.320: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 29.549970864s
Feb 16 12:40:48.400: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 31.630314185s
Feb 16 12:40:50.417: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Running", Reason="", readiness=false. Elapsed: 33.646953898s
Feb 16 12:40:52.435: INFO: Pod "pod-subpath-test-secret-pvks": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.665089063s
STEP: Saw pod success
Feb 16 12:40:52.435: INFO: Pod "pod-subpath-test-secret-pvks" satisfied condition "success or failure"
Feb 16 12:40:52.452: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-pvks container test-container-subpath-secret-pvks: 
STEP: delete the pod
Feb 16 12:40:53.669: INFO: Waiting for pod pod-subpath-test-secret-pvks to disappear
Feb 16 12:40:53.805: INFO: Pod pod-subpath-test-secret-pvks no longer exists
STEP: Deleting pod pod-subpath-test-secret-pvks
Feb 16 12:40:53.805: INFO: Deleting pod "pod-subpath-test-secret-pvks" in namespace "e2e-tests-subpath-fmhrs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:40:53.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fmhrs" for this suite.
Feb 16 12:41:01.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:41:02.031: INFO: namespace: e2e-tests-subpath-fmhrs, resource: bindings, ignored listing per whitelist
Feb 16 12:41:02.099: INFO: namespace e2e-tests-subpath-fmhrs deletion completed in 8.274830511s

• [SLOW TEST:45.615 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:41:02.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0216 12:41:06.994529       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 16 12:41:06.994: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:41:06.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fh7dq" for this suite.
Feb 16 12:41:13.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:41:13.979: INFO: namespace: e2e-tests-gc-fh7dq, resource: bindings, ignored listing per whitelist
Feb 16 12:41:14.005: INFO: namespace e2e-tests-gc-fh7dq deletion completed in 6.912751514s

• [SLOW TEST:11.905 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:41:14.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb 16 12:41:14.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 16 12:41:14.492: INFO: stderr: ""
Feb 16 12:41:14.493: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:41:14.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dcwpd" for this suite.
Feb 16 12:41:20.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:41:20.761: INFO: namespace: e2e-tests-kubectl-dcwpd, resource: bindings, ignored listing per whitelist
Feb 16 12:41:20.817: INFO: namespace e2e-tests-kubectl-dcwpd deletion completed in 6.307562534s

• [SLOW TEST:6.811 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:41:20.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:41:21.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-j8r5b" to be "success or failure"
Feb 16 12:41:21.618: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 178.192165ms
Feb 16 12:41:23.655: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215441696s
Feb 16 12:41:25.683: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243408735s
Feb 16 12:41:28.317: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877358011s
Feb 16 12:41:30.331: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.891330006s
Feb 16 12:41:33.681: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.242071762s
Feb 16 12:41:35.696: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.256409662s
STEP: Saw pod success
Feb 16 12:41:35.696: INFO: Pod "downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:41:36.005: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:41:36.176: INFO: Waiting for pod downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:41:36.198: INFO: Pod downwardapi-volume-a27c5574-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:41:36.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j8r5b" for this suite.
Feb 16 12:41:42.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:41:42.280: INFO: namespace: e2e-tests-projected-j8r5b, resource: bindings, ignored listing per whitelist
Feb 16 12:41:42.387: INFO: namespace e2e-tests-projected-j8r5b deletion completed in 6.179434684s

• [SLOW TEST:21.570 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
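The projected downward API test above creates a pod whose projected volume exposes the pod's own name as a file (`should provide podname only`), then waits for the pod to reach `Succeeded` and checks the container's output. A minimal manifest sketch of that setup — the object name and `busybox` image are illustrative stand-ins, not the generated names or test image from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # stand-in for the e2e test image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:                       # projected volume with a downwardAPI source
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # exposes the pod's own name as a file
```

The container prints the file contents and exits, which is why the log polls until `Phase="Succeeded"` rather than `Running`.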
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:41:42.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:41:42.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-4fjgb" to be "success or failure"
Feb 16 12:41:42.658: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.86734ms
Feb 16 12:41:44.785: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147589228s
Feb 16 12:41:46.798: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16100414s
Feb 16 12:41:49.288: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651278273s
Feb 16 12:41:51.378: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.741138596s
Feb 16 12:41:53.386: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.749266233s
STEP: Saw pod success
Feb 16 12:41:53.386: INFO: Pod "downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:41:53.390: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:41:54.336: INFO: Waiting for pod downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:41:54.349: INFO: Pod downwardapi-volume-af1b614f-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:41:54.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4fjgb" for this suite.
Feb 16 12:42:00.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:42:00.831: INFO: namespace: e2e-tests-downward-api-4fjgb, resource: bindings, ignored listing per whitelist
Feb 16 12:42:00.869: INFO: namespace e2e-tests-downward-api-4fjgb deletion completed in 6.504188156s

• [SLOW TEST:18.481 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
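The Downward API volume test above exercises the default-limit behavior: when a container sets no CPU limit, `resourceFieldRef` on `limits.cpu` reports the node's allocatable CPU instead. A sketch of the shape of such a pod, with illustrative names and a stand-in image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # stand-in for the e2e test image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # Deliberately no resources.limits.cpu: the downward API then
    # falls back to the node's allocatable CPU for limits.cpu.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```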
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:42:00.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-ba2a2b49-50b9-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 12:42:01.086: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-9bbf8" to be "success or failure"
Feb 16 12:42:01.095: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.697467ms
Feb 16 12:42:03.130: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044573881s
Feb 16 12:42:05.148: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062608417s
Feb 16 12:42:07.829: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.743044152s
Feb 16 12:42:09.843: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.757807132s
Feb 16 12:42:11.871: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.785271628s
STEP: Saw pod success
Feb 16 12:42:11.871: INFO: Pod "pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:42:11.886: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 12:42:12.372: INFO: Waiting for pod pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:42:12.414: INFO: Pod pod-projected-configmaps-ba2ad3d8-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:42:12.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9bbf8" for this suite.
Feb 16 12:42:18.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:42:18.783: INFO: namespace: e2e-tests-projected-9bbf8, resource: bindings, ignored listing per whitelist
Feb 16 12:42:18.834: INFO: namespace e2e-tests-projected-9bbf8 deletion completed in 6.246364486s

• [SLOW TEST:17.964 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
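The projected configMap test above ("with mappings and Item mode set") remaps a configMap key to a different file path and sets a per-item file mode inside a projected volume. A sketch of that arrangement — names, key/path values, and the `busybox` image are illustrative assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                            # stand-in for the e2e test image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2   # key remapped to a different path ("mapping")
            mode: 0400             # per-item file mode ("Item mode set")
```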
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:42:18.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:42:19.289: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 16 12:42:19.373: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 16 12:42:24.396: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 16 12:42:30.426: INFO: Creating deployment "test-rolling-update-deployment"
Feb 16 12:42:30.477: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 16 12:42:30.731: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 16 12:42:32.762: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 16 12:42:32.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453750, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 12:42:34.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453750, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 12:42:37.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453750, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 12:42:39.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453750, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 12:42:40.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717453750, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 12:42:42.792: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 16 12:42:42.881: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-z8z7d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-z8z7d/deployments/test-rolling-update-deployment,UID:cba9d9d9-50b9-11ea-a994-fa163e34d433,ResourceVersion:21868704,Generation:1,CreationTimestamp:2020-02-16 12:42:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-16 12:42:31 +0000 UTC 2020-02-16 12:42:31 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-16 12:42:41 +0000 UTC 2020-02-16 12:42:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 16 12:42:42.888: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-z8z7d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-z8z7d/replicasets/test-rolling-update-deployment-75db98fb4c,UID:cbe36a33-50b9-11ea-a994-fa163e34d433,ResourceVersion:21868695,Generation:1,CreationTimestamp:2020-02-16 12:42:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cba9d9d9-50b9-11ea-a994-fa163e34d433 0xc002725667 0xc002725668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 16 12:42:42.888: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 16 12:42:42.889: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-z8z7d,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-z8z7d/replicasets/test-rolling-update-controller,UID:c5066c82-50b9-11ea-a994-fa163e34d433,ResourceVersion:21868703,Generation:2,CreationTimestamp:2020-02-16 12:42:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cba9d9d9-50b9-11ea-a994-fa163e34d433 0xc0027254cf 0xc0027254e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 12:42:42.904: INFO: Pod "test-rolling-update-deployment-75db98fb4c-lqbxm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-lqbxm,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-z8z7d,SelfLink:/api/v1/namespaces/e2e-tests-deployment-z8z7d/pods/test-rolling-update-deployment-75db98fb4c-lqbxm,UID:cbf2693b-50b9-11ea-a994-fa163e34d433,ResourceVersion:21868694,Generation:0,CreationTimestamp:2020-02-16 12:42:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c cbe36a33-50b9-11ea-a994-fa163e34d433 0xc002732f17 0xc002732f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-86t7v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-86t7v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-86t7v true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002732f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002732fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:42:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:42:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:42:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 12:42:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-16 12:42:31 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-16 12:42:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://c533e43215ec4efe401307c72447d946c4326204a62be74dfe6245002d2bf587}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:42:42.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-z8z7d" for this suite.
Feb 16 12:42:55.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:42:55.125: INFO: namespace: e2e-tests-deployment-z8z7d, resource: bindings, ignored listing per whitelist
Feb 16 12:42:55.174: INFO: namespace e2e-tests-deployment-z8z7d deletion completed in 12.257637681s

• [SLOW TEST:36.341 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
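The Deployment test above creates a bare ReplicaSet ("test-rolling-update-controller"), then a Deployment whose selector matches it, and verifies the adopted ReplicaSet is scaled to zero while a new one rolls out. From the object dump in the log, the Deployment is approximately equivalent to the sketch below (the `replicas`, selector labels, rolling-update percentages, and redis image all appear in the dumped spec; field ordering and omitted defaults are mine):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod        # also matches the pre-existing ReplicaSet's pods,
                              # which is how the Deployment adopts it
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

Because the pod template differs from the adopted ReplicaSet's (redis vs. the original nginx template), the controller creates a new ReplicaSet (`-75db98fb4c`) and scales the adopted one to `Replicas:*0`, exactly as the final dumps show.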
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:42:55.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-da869aac-50b9-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 12:42:55.573: INFO: Waiting up to 5m0s for pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-64rkh" to be "success or failure"
Feb 16 12:42:55.612: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 38.502529ms
Feb 16 12:42:57.896: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322998799s
Feb 16 12:42:59.911: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338120294s
Feb 16 12:43:01.927: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.353988612s
Feb 16 12:43:03.982: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.408782169s
Feb 16 12:43:05.996: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.422516799s
STEP: Saw pod success
Feb 16 12:43:05.996: INFO: Pod "pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:43:06.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 16 12:43:06.581: INFO: Waiting for pod pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:43:06.593: INFO: Pod pod-secrets-daa2498c-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:43:06.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-64rkh" for this suite.
Feb 16 12:43:14.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:43:14.665: INFO: namespace: e2e-tests-secrets-64rkh, resource: bindings, ignored listing per whitelist
Feb 16 12:43:14.731: INFO: namespace e2e-tests-secrets-64rkh deletion completed in 8.128507867s
STEP: Destroying namespace "e2e-tests-secret-namespace-c7wtl" for this suite.
Feb 16 12:43:20.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:43:20.936: INFO: namespace: e2e-tests-secret-namespace-c7wtl, resource: bindings, ignored listing per whitelist
Feb 16 12:43:21.074: INFO: namespace e2e-tests-secret-namespace-c7wtl deletion completed in 6.342613587s

• [SLOW TEST:25.899 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:43:21.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:43:21.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-6662t" to be "success or failure"
Feb 16 12:43:21.279: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.303375ms
Feb 16 12:43:23.625: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365834338s
Feb 16 12:43:25.654: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394661524s
Feb 16 12:43:28.942: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.682190152s
Feb 16 12:43:30.973: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.713783012s
Feb 16 12:43:32.994: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.734744822s
STEP: Saw pod success
Feb 16 12:43:32.994: INFO: Pod "downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:43:32.999: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:43:33.617: INFO: Waiting for pod downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008 to disappear
Feb 16 12:43:33.997: INFO: Pod downwardapi-volume-e9f43ad6-50b9-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:43:33.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6662t" for this suite.
Feb 16 12:43:40.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:43:40.304: INFO: namespace: e2e-tests-projected-6662t, resource: bindings, ignored listing per whitelist
Feb 16 12:43:40.414: INFO: namespace e2e-tests-projected-6662t deletion completed in 6.386443324s

• [SLOW TEST:19.339 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:43:40.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 16 12:43:50.837: INFO: Pod pod-hostip-f582bebc-50b9-11ea-aa00-0242ac110008 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:43:50.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zzx86" for this suite.
Feb 16 12:44:14.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:44:14.975: INFO: namespace: e2e-tests-pods-zzx86, resource: bindings, ignored listing per whitelist
Feb 16 12:44:15.030: INFO: namespace e2e-tests-pods-zzx86 deletion completed in 24.186972211s

• [SLOW TEST:34.616 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:44:15.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 16 12:44:15.385: INFO: Waiting up to 5m0s for pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-fjb6p" to be "success or failure"
Feb 16 12:44:15.416: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.277424ms
Feb 16 12:44:17.563: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17705923s
Feb 16 12:44:19.608: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222202375s
Feb 16 12:44:22.667: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.281001913s
Feb 16 12:44:24.735: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.348997038s
Feb 16 12:44:26.751: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.365329986s
Feb 16 12:44:28.801: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.415400671s
STEP: Saw pod success
Feb 16 12:44:28.801: INFO: Pod "downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:44:28.810: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 16 12:44:29.023: INFO: Waiting for pod downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008 to disappear
Feb 16 12:44:29.032: INFO: Pod downward-api-0a2a98cd-50ba-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:44:29.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fjb6p" for this suite.
Feb 16 12:44:35.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:44:35.318: INFO: namespace: e2e-tests-downward-api-fjb6p, resource: bindings, ignored listing per whitelist
Feb 16 12:44:35.339: INFO: namespace e2e-tests-downward-api-fjb6p deletion completed in 6.285333549s

• [SLOW TEST:20.308 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:44:35.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4b2gr
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 16 12:44:35.549: INFO: Found 0 stateful pods, waiting for 3
Feb 16 12:44:45.584: INFO: Found 2 stateful pods, waiting for 3
Feb 16 12:44:55.572: INFO: Found 2 stateful pods, waiting for 3
Feb 16 12:45:05.565: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:45:05.565: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:45:05.565: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 12:45:15.589: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:45:15.589: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:45:15.589: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 12:45:15.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4b2gr ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 12:45:16.428: INFO: stderr: "I0216 12:45:15.938487    3805 log.go:172] (0xc000742160) (0xc0005fe500) Create stream\nI0216 12:45:15.939253    3805 log.go:172] (0xc000742160) (0xc0005fe500) Stream added, broadcasting: 1\nI0216 12:45:15.946776    3805 log.go:172] (0xc000742160) Reply frame received for 1\nI0216 12:45:15.946863    3805 log.go:172] (0xc000742160) (0xc0006c0aa0) Create stream\nI0216 12:45:15.946922    3805 log.go:172] (0xc000742160) (0xc0006c0aa0) Stream added, broadcasting: 3\nI0216 12:45:15.949860    3805 log.go:172] (0xc000742160) Reply frame received for 3\nI0216 12:45:15.949885    3805 log.go:172] (0xc000742160) (0xc0006c0be0) Create stream\nI0216 12:45:15.949896    3805 log.go:172] (0xc000742160) (0xc0006c0be0) Stream added, broadcasting: 5\nI0216 12:45:15.951557    3805 log.go:172] (0xc000742160) Reply frame received for 5\nI0216 12:45:16.177826    3805 log.go:172] (0xc000742160) Data frame received for 3\nI0216 12:45:16.178126    3805 log.go:172] (0xc0006c0aa0) (3) Data frame handling\nI0216 12:45:16.178169    3805 log.go:172] (0xc0006c0aa0) (3) Data frame sent\nI0216 12:45:16.410158    3805 log.go:172] (0xc000742160) Data frame received for 1\nI0216 12:45:16.410257    3805 log.go:172] (0xc0005fe500) (1) Data frame handling\nI0216 12:45:16.410295    3805 log.go:172] (0xc0005fe500) (1) Data frame sent\nI0216 12:45:16.410800    3805 log.go:172] (0xc000742160) (0xc0005fe500) Stream removed, broadcasting: 1\nI0216 12:45:16.413777    3805 log.go:172] (0xc000742160) (0xc0006c0aa0) Stream removed, broadcasting: 3\nI0216 12:45:16.413846    3805 log.go:172] (0xc000742160) (0xc0006c0be0) Stream removed, broadcasting: 5\nI0216 12:45:16.413903    3805 log.go:172] (0xc000742160) (0xc0005fe500) Stream removed, broadcasting: 1\nI0216 12:45:16.413974    3805 log.go:172] (0xc000742160) (0xc0006c0aa0) Stream removed, broadcasting: 3\nI0216 12:45:16.413991    3805 log.go:172] (0xc000742160) (0xc0006c0be0) Stream removed, broadcasting: 5\n"
Feb 16 12:45:16.429: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 12:45:16.429: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 16 12:45:16.661: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 16 12:45:26.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4b2gr ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:45:27.376: INFO: stderr: "I0216 12:45:26.953096    3827 log.go:172] (0xc0006da370) (0xc0006fe640) Create stream\nI0216 12:45:26.953322    3827 log.go:172] (0xc0006da370) (0xc0006fe640) Stream added, broadcasting: 1\nI0216 12:45:26.963973    3827 log.go:172] (0xc0006da370) Reply frame received for 1\nI0216 12:45:26.964064    3827 log.go:172] (0xc0006da370) (0xc000784dc0) Create stream\nI0216 12:45:26.964084    3827 log.go:172] (0xc0006da370) (0xc000784dc0) Stream added, broadcasting: 3\nI0216 12:45:26.965615    3827 log.go:172] (0xc0006da370) Reply frame received for 3\nI0216 12:45:26.965680    3827 log.go:172] (0xc0006da370) (0xc0006fe6e0) Create stream\nI0216 12:45:26.965696    3827 log.go:172] (0xc0006da370) (0xc0006fe6e0) Stream added, broadcasting: 5\nI0216 12:45:26.967142    3827 log.go:172] (0xc0006da370) Reply frame received for 5\nI0216 12:45:27.160269    3827 log.go:172] (0xc0006da370) Data frame received for 3\nI0216 12:45:27.160349    3827 log.go:172] (0xc000784dc0) (3) Data frame handling\nI0216 12:45:27.160369    3827 log.go:172] (0xc000784dc0) (3) Data frame sent\nI0216 12:45:27.368856    3827 log.go:172] (0xc0006da370) (0xc000784dc0) Stream removed, broadcasting: 3\nI0216 12:45:27.369025    3827 log.go:172] (0xc0006da370) Data frame received for 1\nI0216 12:45:27.369056    3827 log.go:172] (0xc0006fe640) (1) Data frame handling\nI0216 12:45:27.369177    3827 log.go:172] (0xc0006fe640) (1) Data frame sent\nI0216 12:45:27.369253    3827 log.go:172] (0xc0006da370) (0xc0006fe6e0) Stream removed, broadcasting: 5\nI0216 12:45:27.369336    3827 log.go:172] (0xc0006da370) (0xc0006fe640) Stream removed, broadcasting: 1\nI0216 12:45:27.369417    3827 log.go:172] (0xc0006da370) Go away received\nI0216 12:45:27.369616    3827 log.go:172] (0xc0006da370) (0xc0006fe640) Stream removed, broadcasting: 1\nI0216 12:45:27.369630    3827 log.go:172] (0xc0006da370) (0xc000784dc0) Stream removed, broadcasting: 3\nI0216 12:45:27.369637    3827 log.go:172] (0xc0006da370) (0xc0006fe6e0) Stream removed, broadcasting: 5\n"
Feb 16 12:45:27.376: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 12:45:27.376: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 12:45:27.787: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:45:27.787: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:27.787: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:27.787: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:37.828: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:45:37.828: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:37.828: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:47.885: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:45:47.885: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:47.885: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:45:58.561: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:45:58.561: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:46:07.819: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:46:07.819: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 12:46:17.815: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 16 12:46:27.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4b2gr ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 16 12:46:28.716: INFO: stderr: "I0216 12:46:28.087511    3849 log.go:172] (0xc0008202c0) (0xc000714640) Create stream\nI0216 12:46:28.087793    3849 log.go:172] (0xc0008202c0) (0xc000714640) Stream added, broadcasting: 1\nI0216 12:46:28.115412    3849 log.go:172] (0xc0008202c0) Reply frame received for 1\nI0216 12:46:28.115548    3849 log.go:172] (0xc0008202c0) (0xc000792d20) Create stream\nI0216 12:46:28.115581    3849 log.go:172] (0xc0008202c0) (0xc000792d20) Stream added, broadcasting: 3\nI0216 12:46:28.117398    3849 log.go:172] (0xc0008202c0) Reply frame received for 3\nI0216 12:46:28.117428    3849 log.go:172] (0xc0008202c0) (0xc000792e60) Create stream\nI0216 12:46:28.117438    3849 log.go:172] (0xc0008202c0) (0xc000792e60) Stream added, broadcasting: 5\nI0216 12:46:28.119895    3849 log.go:172] (0xc0008202c0) Reply frame received for 5\nI0216 12:46:28.429293    3849 log.go:172] (0xc0008202c0) Data frame received for 3\nI0216 12:46:28.429373    3849 log.go:172] (0xc000792d20) (3) Data frame handling\nI0216 12:46:28.429397    3849 log.go:172] (0xc000792d20) (3) Data frame sent\nI0216 12:46:28.698507    3849 log.go:172] (0xc0008202c0) (0xc000792d20) Stream removed, broadcasting: 3\nI0216 12:46:28.698711    3849 log.go:172] (0xc0008202c0) Data frame received for 1\nI0216 12:46:28.698751    3849 log.go:172] (0xc000714640) (1) Data frame handling\nI0216 12:46:28.698776    3849 log.go:172] (0xc000714640) (1) Data frame sent\nI0216 12:46:28.698831    3849 log.go:172] (0xc0008202c0) (0xc000792e60) Stream removed, broadcasting: 5\nI0216 12:46:28.699177    3849 log.go:172] (0xc0008202c0) (0xc000714640) Stream removed, broadcasting: 1\nI0216 12:46:28.699403    3849 log.go:172] (0xc0008202c0) Go away received\nI0216 12:46:28.700554    3849 log.go:172] (0xc0008202c0) (0xc000714640) Stream removed, broadcasting: 1\nI0216 12:46:28.700609    3849 log.go:172] (0xc0008202c0) (0xc000792d20) Stream removed, broadcasting: 3\nI0216 12:46:28.700628    3849 log.go:172] (0xc0008202c0) (0xc000792e60) Stream removed, broadcasting: 5\n"
Feb 16 12:46:28.716: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 16 12:46:28.716: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 16 12:46:38.794: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 16 12:46:49.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4b2gr ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 16 12:46:50.008: INFO: stderr: "I0216 12:46:49.481909    3872 log.go:172] (0xc0001386e0) (0xc000619400) Create stream\nI0216 12:46:49.482036    3872 log.go:172] (0xc0001386e0) (0xc000619400) Stream added, broadcasting: 1\nI0216 12:46:49.487977    3872 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0216 12:46:49.488017    3872 log.go:172] (0xc0001386e0) (0xc000734000) Create stream\nI0216 12:46:49.488027    3872 log.go:172] (0xc0001386e0) (0xc000734000) Stream added, broadcasting: 3\nI0216 12:46:49.489166    3872 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0216 12:46:49.489191    3872 log.go:172] (0xc0001386e0) (0xc000368000) Create stream\nI0216 12:46:49.489211    3872 log.go:172] (0xc0001386e0) (0xc000368000) Stream added, broadcasting: 5\nI0216 12:46:49.491113    3872 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0216 12:46:49.790696    3872 log.go:172] (0xc0001386e0) Data frame received for 3\nI0216 12:46:49.790804    3872 log.go:172] (0xc000734000) (3) Data frame handling\nI0216 12:46:49.790844    3872 log.go:172] (0xc000734000) (3) Data frame sent\nI0216 12:46:49.995943    3872 log.go:172] (0xc0001386e0) (0xc000734000) Stream removed, broadcasting: 3\nI0216 12:46:49.996125    3872 log.go:172] (0xc0001386e0) Data frame received for 1\nI0216 12:46:49.996150    3872 log.go:172] (0xc000619400) (1) Data frame handling\nI0216 12:46:49.996169    3872 log.go:172] (0xc000619400) (1) Data frame sent\nI0216 12:46:49.996186    3872 log.go:172] (0xc0001386e0) (0xc000368000) Stream removed, broadcasting: 5\nI0216 12:46:49.996256    3872 log.go:172] (0xc0001386e0) (0xc000619400) Stream removed, broadcasting: 1\nI0216 12:46:49.996287    3872 log.go:172] (0xc0001386e0) Go away received\nI0216 12:46:49.996758    3872 log.go:172] (0xc0001386e0) (0xc000619400) Stream removed, broadcasting: 1\nI0216 12:46:49.996791    3872 log.go:172] (0xc0001386e0) (0xc000734000) Stream removed, broadcasting: 3\nI0216 12:46:49.996806    3872 log.go:172] (0xc0001386e0) (0xc000368000) Stream removed, broadcasting: 5\n"
Feb 16 12:46:50.008: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 16 12:46:50.008: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 16 12:47:00.085: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:47:00.085: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:00.085: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:10.115: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:47:10.115: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:10.115: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:20.120: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:47:20.120: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:20.120: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:30.408: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:47:30.409: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:40.126: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
Feb 16 12:47:40.126: INFO: Waiting for Pod e2e-tests-statefulset-4b2gr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 16 12:47:50.115: INFO: Waiting for StatefulSet e2e-tests-statefulset-4b2gr/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 16 12:48:00.111: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4b2gr
Feb 16 12:48:00.116: INFO: Scaling statefulset ss2 to 0
Feb 16 12:48:30.172: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 12:48:30.179: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:48:30.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4b2gr" for this suite.
Feb 16 12:48:38.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:48:38.439: INFO: namespace: e2e-tests-statefulset-4b2gr, resource: bindings, ignored listing per whitelist
Feb 16 12:48:38.669: INFO: namespace e2e-tests-statefulset-4b2gr deletion completed in 8.368126317s

• [SLOW TEST:243.329 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:48:38.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 12:48:49.166: INFO: Waiting up to 5m0s for pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008" in namespace "e2e-tests-pods-bzddq" to be "success or failure"
Feb 16 12:48:49.185: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.83078ms
Feb 16 12:48:51.212: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046226597s
Feb 16 12:48:53.287: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120726508s
Feb 16 12:48:55.665: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499145952s
Feb 16 12:48:57.679: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51269703s
Feb 16 12:48:59.691: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.525083314s
STEP: Saw pod success
Feb 16 12:48:59.691: INFO: Pod "client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:48:59.694: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008 container env3cont: 
STEP: delete the pod
Feb 16 12:49:00.311: INFO: Waiting for pod client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008 to disappear
Feb 16 12:49:00.499: INFO: Pod client-envvars-ad608db9-50ba-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:49:00.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bzddq" for this suite.
Feb 16 12:49:42.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:49:42.749: INFO: namespace: e2e-tests-pods-bzddq, resource: bindings, ignored listing per whitelist
Feb 16 12:49:42.815: INFO: namespace e2e-tests-pods-bzddq deletion completed in 42.297550688s

• [SLOW TEST:64.144 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
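The repeated `Phase="Pending" ... Elapsed: ...` lines in the test above come from a poll loop that re-reads the pod phase every couple of seconds until it reaches a terminal phase ("success or failure") or a 5-minute timeout. A minimal Python sketch of that pattern — the `get_phase` callback and the 2-second interval are illustrative assumptions, not the e2e framework's actual code:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    Each poll logs the current phase and elapsed time, producing output
    shaped like the 'Phase="Pending" ... Elapsed: ...' lines in the log.
    """
    start = now()
    while True:
        elapsed = now() - start
        phase = get_phase()
        print(f'Pod: Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

With an injected fake clock the loop reproduces the Pending → Succeeded sequence from the log without real waiting.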
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:49:42.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:49:43.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-pnr4l" to be "success or failure"
Feb 16 12:49:43.034: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.882444ms
Feb 16 12:49:45.045: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02729067s
Feb 16 12:49:47.073: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054408494s
Feb 16 12:49:49.183: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16506682s
Feb 16 12:49:51.223: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205238904s
Feb 16 12:49:53.234: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.215596871s
STEP: Saw pod success
Feb 16 12:49:53.234: INFO: Pod "downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:49:53.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:49:54.244: INFO: Waiting for pod downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008 to disappear
Feb 16 12:49:54.288: INFO: Pod downwardapi-volume-cd756ec5-50ba-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:49:54.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pnr4l" for this suite.
Feb 16 12:50:00.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:50:00.684: INFO: namespace: e2e-tests-projected-pnr4l, resource: bindings, ignored listing per whitelist
Feb 16 12:50:00.709: INFO: namespace e2e-tests-projected-pnr4l deletion completed in 6.384369654s

• [SLOW TEST:17.894 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
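The downward API volume test above exposes the container's cpu limit as a file whose content is the limit divided by a divisor and rounded up to a whole number. A small Python sketch of that conversion — it parses only plain and milli-cpu quantities, a simplification of the real Kubernetes quantity parser:

```python
import math

def cpu_to_millis(quantity):
    """Parse a cpu quantity such as "2", "500m", or "0.5" into millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def downward_api_cpu(limit, divisor="1"):
    """Value written into the downward API file for limits.cpu:
    the limit divided by the divisor, rounded up."""
    return math.ceil(cpu_to_millis(limit) / cpu_to_millis(divisor))
```

For example, a 500m limit reads as `1` with the default divisor of 1 core, and as `500` with a divisor of `1m`.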
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:50:00.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qq487
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-qq487
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-qq487
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-qq487
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace e2e-tests-statefulset-qq487
Feb 16 12:50:13.179: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qq487, name: ss-0, uid: df360e8d-50ba-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 16 12:50:22.501: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qq487, name: ss-0, uid: df360e8d-50ba-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 16 12:50:22.642: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-qq487, name: ss-0, uid: df360e8d-50ba-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 16 12:50:22.804: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-qq487
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-qq487
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-qq487 and enters the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 16 12:50:35.932: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qq487
Feb 16 12:50:35.941: INFO: Scaling statefulset ss to 0
Feb 16 12:50:46.046: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 12:50:46.073: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:50:46.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qq487" for this suite.
Feb 16 12:50:54.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:50:55.064: INFO: namespace: e2e-tests-statefulset-qq487, resource: bindings, ignored listing per whitelist
Feb 16 12:50:55.542: INFO: namespace e2e-tests-statefulset-qq487 deletion completed in 9.392271382s

• [SLOW TEST:54.832 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
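The eviction check above watches pod events for ss-0: the stateful pod with the conflicting port goes Pending → Failed, is deleted by the statefulset controller, and a replacement pod with a new UID is created. A minimal sketch of that observation logic — the event-tuple shape is an assumption for illustration:

```python
def recreated_at_least_once(events):
    """Decide whether the stateful pod was recreated at least once.

    events: ordered tuples of either ("observed", uid, phase) or
    ("deleted", uid). Returns True once a previously deleted pod UID is
    followed by an observation of a different UID, i.e. the controller
    deleted the failed pod and created a replacement.
    """
    deleted_uids = set()
    for kind, uid, *rest in events:
        if kind == "deleted":
            deleted_uids.add(uid)
        elif kind == "observed" and deleted_uids and uid not in deleted_uids:
            return True
    return False
```

Fed the sequence from the log (Pending, Failed, delete event, then a fresh ss-0), this returns True.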
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:50:55.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 16 12:53:59.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:53:59.356: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:01.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:01.382: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:03.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:03.389: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:05.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:05.377: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:07.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:07.371: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:09.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:09.385: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:11.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:11.415: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:13.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:13.451: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:15.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:15.394: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:17.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:17.367: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:19.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:19.391: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:21.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:21.398: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:23.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:23.371: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:25.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:25.391: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:27.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:27.369: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:29.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:31.568: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:33.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:33.397: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:35.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:35.382: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:37.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:37.378: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:39.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:39.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:41.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:41.387: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:43.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:43.381: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:45.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:45.380: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:47.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:47.383: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:49.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:49.386: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:51.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:51.379: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:53.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:53.385: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:55.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:55.378: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:57.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:57.826: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:54:59.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:54:59.371: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:01.361: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:01.453: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:03.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:03.386: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:05.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:05.379: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:07.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:07.397: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:09.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:09.436: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:11.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:11.524: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:13.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:13.372: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:15.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:15.376: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:17.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:17.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:19.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:19.377: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:21.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:21.400: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:23.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:23.377: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:25.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:25.374: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:27.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:27.377: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:29.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:29.379: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:31.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:31.369: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:33.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:33.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:35.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:35.383: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:37.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:37.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:39.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:39.379: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:41.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:41.371: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:43.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:43.764: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:45.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:45.372: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:47.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:47.389: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:49.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:49.378: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:51.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:51.370: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 16 12:55:53.357: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 16 12:55:53.379: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:55:53.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-st76m" for this suite.
Feb 16 12:56:17.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:56:17.542: INFO: namespace: e2e-tests-container-lifecycle-hook-st76m, resource: bindings, ignored listing per whitelist
Feb 16 12:56:17.931: INFO: namespace e2e-tests-container-lifecycle-hook-st76m deletion completed in 24.54545933s

• [SLOW TEST:322.389 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
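The long run of "Waiting for pod ... to disappear / still exists" pairs above is a second poll pattern: every 2 seconds, check whether the pod can still be fetched, until the fetch reports it gone or a timeout expires. A sketch under the assumption that `exists` stands in for a GET against the API server:

```python
import time

def wait_for_pod_to_disappear(name, exists, timeout=300.0, interval=2.0,
                              now=time.monotonic, sleep=time.sleep):
    """Poll until the pod no longer exists, or give up at the timeout.

    Prints the same "Waiting ... / still exists / no longer exists"
    message pairs seen in the log, one pair per poll.
    """
    start = now()
    while now() - start < timeout:
        print(f"Waiting for pod {name} to disappear")
        if not exists():
            print(f"Pod {name} no longer exists")
            return True
        print(f"Pod {name} still exists")
        sleep(interval)
    return False
```

The deletion above took roughly five minutes of such polls because the poststart pod's grace period and hook handling delayed its removal.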
SSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:56:17.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-kq82p
I0216 12:56:18.613716       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-kq82p, replica count: 1
I0216 12:56:19.664639       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:20.665254       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:21.665658       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:22.666825       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:23.668154       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:24.668698       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:25.669094       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:26.669801       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:27.670349       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:28.671322       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:29.672377       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0216 12:56:30.690706       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 16 12:56:30.836: INFO: Created: latency-svc-sdbh9
Feb 16 12:56:31.309: INFO: Got endpoints: latency-svc-sdbh9 [518.782274ms]
Feb 16 12:56:31.587: INFO: Created: latency-svc-gcvhw
Feb 16 12:56:31.601: INFO: Got endpoints: latency-svc-gcvhw [290.449712ms]
Feb 16 12:56:31.921: INFO: Created: latency-svc-79wxd
Feb 16 12:56:31.921: INFO: Got endpoints: latency-svc-79wxd [609.708465ms]
Feb 16 12:56:31.974: INFO: Created: latency-svc-m7kwt
Feb 16 12:56:32.165: INFO: Got endpoints: latency-svc-m7kwt [854.284272ms]
Feb 16 12:56:32.216: INFO: Created: latency-svc-r99qk
Feb 16 12:56:32.229: INFO: Got endpoints: latency-svc-r99qk [917.380065ms]
Feb 16 12:56:32.436: INFO: Created: latency-svc-b8n7p
Feb 16 12:56:32.453: INFO: Got endpoints: latency-svc-b8n7p [1.141231633s]
Feb 16 12:56:32.718: INFO: Created: latency-svc-8j4hc
Feb 16 12:56:32.736: INFO: Got endpoints: latency-svc-8j4hc [1.423838839s]
Feb 16 12:56:32.941: INFO: Created: latency-svc-bz7ct
Feb 16 12:56:32.997: INFO: Got endpoints: latency-svc-bz7ct [1.684830439s]
Feb 16 12:56:33.198: INFO: Created: latency-svc-wmx4x
Feb 16 12:56:33.238: INFO: Created: latency-svc-sgbsq
Feb 16 12:56:33.240: INFO: Got endpoints: latency-svc-wmx4x [1.927737094s]
Feb 16 12:56:33.398: INFO: Got endpoints: latency-svc-sgbsq [2.085802824s]
Feb 16 12:56:33.448: INFO: Created: latency-svc-h859r
Feb 16 12:56:33.448: INFO: Got endpoints: latency-svc-h859r [2.13712824s]
Feb 16 12:56:33.656: INFO: Created: latency-svc-wgprl
Feb 16 12:56:33.681: INFO: Got endpoints: latency-svc-wgprl [2.369016088s]
Feb 16 12:56:33.946: INFO: Created: latency-svc-5rlb5
Feb 16 12:56:33.975: INFO: Got endpoints: latency-svc-5rlb5 [2.663515999s]
Feb 16 12:56:34.222: INFO: Created: latency-svc-2s8k4
Feb 16 12:56:34.257: INFO: Got endpoints: latency-svc-2s8k4 [2.944197638s]
Feb 16 12:56:34.516: INFO: Created: latency-svc-627qd
Feb 16 12:56:34.517: INFO: Got endpoints: latency-svc-627qd [3.204949211s]
Feb 16 12:56:34.718: INFO: Created: latency-svc-qxnd8
Feb 16 12:56:34.956: INFO: Got endpoints: latency-svc-qxnd8 [3.64412821s]
Feb 16 12:56:34.963: INFO: Created: latency-svc-bcxvm
Feb 16 12:56:34.987: INFO: Got endpoints: latency-svc-bcxvm [3.386131464s]
Feb 16 12:56:35.035: INFO: Created: latency-svc-tckgl
Feb 16 12:56:35.049: INFO: Got endpoints: latency-svc-tckgl [3.1280376s]
Feb 16 12:56:35.272: INFO: Created: latency-svc-2d8ls
Feb 16 12:56:35.283: INFO: Got endpoints: latency-svc-2d8ls [3.117132998s]
Feb 16 12:56:35.466: INFO: Created: latency-svc-qlrzf
Feb 16 12:56:35.528: INFO: Got endpoints: latency-svc-qlrzf [3.298369055s]
Feb 16 12:56:35.547: INFO: Created: latency-svc-7kdxv
Feb 16 12:56:35.641: INFO: Got endpoints: latency-svc-7kdxv [3.187124438s]
Feb 16 12:56:35.679: INFO: Created: latency-svc-h4rft
Feb 16 12:56:35.686: INFO: Got endpoints: latency-svc-h4rft [2.949657985s]
Feb 16 12:56:35.865: INFO: Created: latency-svc-t8gtd
Feb 16 12:56:35.875: INFO: Got endpoints: latency-svc-t8gtd [2.877373175s]
Feb 16 12:56:35.923: INFO: Created: latency-svc-wcm2c
Feb 16 12:56:36.025: INFO: Got endpoints: latency-svc-wcm2c [2.785131719s]
Feb 16 12:56:36.052: INFO: Created: latency-svc-7sg4r
Feb 16 12:56:36.061: INFO: Got endpoints: latency-svc-7sg4r [2.662228844s]
Feb 16 12:56:36.120: INFO: Created: latency-svc-n7cf5
Feb 16 12:56:36.280: INFO: Got endpoints: latency-svc-n7cf5 [2.831818562s]
Feb 16 12:56:36.442: INFO: Created: latency-svc-ww9p7
Feb 16 12:56:36.481: INFO: Got endpoints: latency-svc-ww9p7 [2.799539016s]
Feb 16 12:56:36.651: INFO: Created: latency-svc-4zxnq
Feb 16 12:56:36.658: INFO: Got endpoints: latency-svc-4zxnq [2.682727169s]
Feb 16 12:56:36.723: INFO: Created: latency-svc-7jhdp
Feb 16 12:56:36.822: INFO: Got endpoints: latency-svc-7jhdp [2.564922028s]
Feb 16 12:56:36.840: INFO: Created: latency-svc-zxk9k
Feb 16 12:56:36.862: INFO: Got endpoints: latency-svc-zxk9k [2.345049379s]
Feb 16 12:56:37.039: INFO: Created: latency-svc-ddx5v
Feb 16 12:56:37.067: INFO: Got endpoints: latency-svc-ddx5v [2.110795863s]
Feb 16 12:56:37.210: INFO: Created: latency-svc-bbqps
Feb 16 12:56:37.253: INFO: Got endpoints: latency-svc-bbqps [2.265834474s]
Feb 16 12:56:37.305: INFO: Created: latency-svc-qp45j
Feb 16 12:56:37.433: INFO: Got endpoints: latency-svc-qp45j [2.383323475s]
Feb 16 12:56:37.456: INFO: Created: latency-svc-l2g56
Feb 16 12:56:37.463: INFO: Got endpoints: latency-svc-l2g56 [2.179479239s]
Feb 16 12:56:37.712: INFO: Created: latency-svc-dgtkp
Feb 16 12:56:37.712: INFO: Got endpoints: latency-svc-dgtkp [2.184439242s]
Feb 16 12:56:37.879: INFO: Created: latency-svc-6msbd
Feb 16 12:56:37.921: INFO: Got endpoints: latency-svc-6msbd [2.279580772s]
Feb 16 12:56:38.040: INFO: Created: latency-svc-4zfc7
Feb 16 12:56:38.069: INFO: Got endpoints: latency-svc-4zfc7 [2.382942055s]
Feb 16 12:56:38.129: INFO: Created: latency-svc-kx74n
Feb 16 12:56:38.273: INFO: Got endpoints: latency-svc-kx74n [2.397651238s]
Feb 16 12:56:38.322: INFO: Created: latency-svc-ssvwn
Feb 16 12:56:38.348: INFO: Got endpoints: latency-svc-ssvwn [2.322540756s]
Feb 16 12:56:38.577: INFO: Created: latency-svc-7kpxh
Feb 16 12:56:38.587: INFO: Got endpoints: latency-svc-7kpxh [2.526764095s]
Feb 16 12:56:38.929: INFO: Created: latency-svc-whmwj
Feb 16 12:56:39.170: INFO: Created: latency-svc-6v6w8
Feb 16 12:56:39.170: INFO: Got endpoints: latency-svc-whmwj [2.889037557s]
Feb 16 12:56:39.195: INFO: Got endpoints: latency-svc-6v6w8 [2.712527327s]
Feb 16 12:56:39.467: INFO: Created: latency-svc-k4fk7
Feb 16 12:56:39.509: INFO: Got endpoints: latency-svc-k4fk7 [2.850035288s]
Feb 16 12:56:39.788: INFO: Created: latency-svc-ffdlq
Feb 16 12:56:39.790: INFO: Got endpoints: latency-svc-ffdlq [2.968273946s]
Feb 16 12:56:40.002: INFO: Created: latency-svc-m5cq8
Feb 16 12:56:40.047: INFO: Got endpoints: latency-svc-m5cq8 [3.184221664s]
Feb 16 12:56:40.302: INFO: Created: latency-svc-7kwwq
Feb 16 12:56:40.366: INFO: Got endpoints: latency-svc-7kwwq [3.298668712s]
Feb 16 12:56:40.561: INFO: Created: latency-svc-6scwh
Feb 16 12:56:40.569: INFO: Got endpoints: latency-svc-6scwh [3.315046058s]
Feb 16 12:56:40.813: INFO: Created: latency-svc-tqnpc
Feb 16 12:56:40.863: INFO: Got endpoints: latency-svc-tqnpc [3.429467391s]
Feb 16 12:56:40.994: INFO: Created: latency-svc-m84cf
Feb 16 12:56:41.185: INFO: Created: latency-svc-xl4nk
Feb 16 12:56:41.254: INFO: Got endpoints: latency-svc-m84cf [3.791517861s]
Feb 16 12:56:41.408: INFO: Got endpoints: latency-svc-xl4nk [3.695397589s]
Feb 16 12:56:41.429: INFO: Created: latency-svc-65pv6
Feb 16 12:56:41.429: INFO: Got endpoints: latency-svc-65pv6 [3.507040748s]
Feb 16 12:56:41.487: INFO: Created: latency-svc-r9vnd
Feb 16 12:56:41.586: INFO: Got endpoints: latency-svc-r9vnd [3.517222413s]
Feb 16 12:56:41.634: INFO: Created: latency-svc-bgvjp
Feb 16 12:56:41.845: INFO: Got endpoints: latency-svc-bgvjp [3.572118224s]
Feb 16 12:56:41.908: INFO: Created: latency-svc-9t59l
Feb 16 12:56:42.050: INFO: Got endpoints: latency-svc-9t59l [3.702133409s]
Feb 16 12:56:42.318: INFO: Created: latency-svc-wzhxb
Feb 16 12:56:42.335: INFO: Got endpoints: latency-svc-wzhxb [3.747805974s]
Feb 16 12:56:42.456: INFO: Created: latency-svc-gj8m2
Feb 16 12:56:42.501: INFO: Got endpoints: latency-svc-gj8m2 [3.331024973s]
Feb 16 12:56:42.710: INFO: Created: latency-svc-dvjkw
Feb 16 12:56:42.728: INFO: Got endpoints: latency-svc-dvjkw [3.53285266s]
Feb 16 12:56:42.868: INFO: Created: latency-svc-r4dls
Feb 16 12:56:42.904: INFO: Got endpoints: latency-svc-r4dls [3.394449061s]
Feb 16 12:56:43.163: INFO: Created: latency-svc-jwjkk
Feb 16 12:56:43.177: INFO: Got endpoints: latency-svc-jwjkk [3.386826776s]
Feb 16 12:56:43.233: INFO: Created: latency-svc-w8cdz
Feb 16 12:56:43.247: INFO: Got endpoints: latency-svc-w8cdz [3.200187421s]
Feb 16 12:56:43.433: INFO: Created: latency-svc-8zjsr
Feb 16 12:56:43.444: INFO: Got endpoints: latency-svc-8zjsr [3.077865277s]
Feb 16 12:56:43.795: INFO: Created: latency-svc-thjx5
Feb 16 12:56:43.795: INFO: Got endpoints: latency-svc-thjx5 [3.226470782s]
Feb 16 12:56:44.051: INFO: Created: latency-svc-kq2qq
Feb 16 12:56:44.068: INFO: Got endpoints: latency-svc-kq2qq [3.204438605s]
Feb 16 12:56:45.001: INFO: Created: latency-svc-zr9tm
Feb 16 12:56:45.018: INFO: Got endpoints: latency-svc-zr9tm [3.763554895s]
Feb 16 12:56:45.583: INFO: Created: latency-svc-64slq
Feb 16 12:56:45.621: INFO: Got endpoints: latency-svc-64slq [4.212085823s]
Feb 16 12:56:46.061: INFO: Created: latency-svc-f7ksf
Feb 16 12:56:46.108: INFO: Got endpoints: latency-svc-f7ksf [4.679443267s]
Feb 16 12:56:47.022: INFO: Created: latency-svc-mrtjz
Feb 16 12:56:47.055: INFO: Got endpoints: latency-svc-mrtjz [5.469020439s]
Feb 16 12:56:47.173: INFO: Created: latency-svc-f8wsp
Feb 16 12:56:47.193: INFO: Got endpoints: latency-svc-f8wsp [5.347628676s]
Feb 16 12:56:47.436: INFO: Created: latency-svc-hw5mp
Feb 16 12:56:47.456: INFO: Got endpoints: latency-svc-hw5mp [5.405566687s]
Feb 16 12:56:47.742: INFO: Created: latency-svc-gsgm6
Feb 16 12:56:47.760: INFO: Got endpoints: latency-svc-gsgm6 [5.424085189s]
Feb 16 12:56:47.962: INFO: Created: latency-svc-frjsk
Feb 16 12:56:47.989: INFO: Got endpoints: latency-svc-frjsk [5.487763452s]
Feb 16 12:56:48.198: INFO: Created: latency-svc-5r45t
Feb 16 12:56:48.212: INFO: Got endpoints: latency-svc-5r45t [5.484290709s]
Feb 16 12:56:48.433: INFO: Created: latency-svc-mbdk8
Feb 16 12:56:48.661: INFO: Got endpoints: latency-svc-mbdk8 [5.75686848s]
Feb 16 12:56:48.713: INFO: Created: latency-svc-phmqt
Feb 16 12:56:48.741: INFO: Got endpoints: latency-svc-phmqt [5.563078557s]
Feb 16 12:56:48.910: INFO: Created: latency-svc-8tz7q
Feb 16 12:56:48.929: INFO: Got endpoints: latency-svc-8tz7q [5.681662709s]
Feb 16 12:56:49.157: INFO: Created: latency-svc-hjnnz
Feb 16 12:56:49.187: INFO: Got endpoints: latency-svc-hjnnz [5.74260089s]
Feb 16 12:56:49.286: INFO: Created: latency-svc-rw2lv
Feb 16 12:56:49.295: INFO: Got endpoints: latency-svc-rw2lv [5.499119734s]
Feb 16 12:56:49.592: INFO: Created: latency-svc-6szjh
Feb 16 12:56:49.610: INFO: Got endpoints: latency-svc-6szjh [5.542774784s]
Feb 16 12:56:49.887: INFO: Created: latency-svc-pvxbc
Feb 16 12:56:49.933: INFO: Got endpoints: latency-svc-pvxbc [4.91465577s]
Feb 16 12:56:50.199: INFO: Created: latency-svc-gcrzw
Feb 16 12:56:50.217: INFO: Got endpoints: latency-svc-gcrzw [4.595964515s]
Feb 16 12:56:50.439: INFO: Created: latency-svc-5vwp2
Feb 16 12:56:50.567: INFO: Got endpoints: latency-svc-5vwp2 [4.458861034s]
Feb 16 12:56:50.608: INFO: Created: latency-svc-f5mjl
Feb 16 12:56:50.913: INFO: Got endpoints: latency-svc-f5mjl [3.857745532s]
Feb 16 12:56:51.189: INFO: Created: latency-svc-ftsxw
Feb 16 12:56:51.424: INFO: Got endpoints: latency-svc-ftsxw [4.230920598s]
Feb 16 12:56:51.500: INFO: Created: latency-svc-tkh9q
Feb 16 12:56:51.772: INFO: Created: latency-svc-8sgv8
Feb 16 12:56:52.000: INFO: Got endpoints: latency-svc-tkh9q [4.543526609s]
Feb 16 12:56:52.007: INFO: Created: latency-svc-5rpb5
Feb 16 12:56:52.047: INFO: Got endpoints: latency-svc-5rpb5 [4.057758659s]
Feb 16 12:56:52.123: INFO: Got endpoints: latency-svc-8sgv8 [4.3633159s]
Feb 16 12:56:52.206: INFO: Created: latency-svc-dv9jz
Feb 16 12:56:52.311: INFO: Got endpoints: latency-svc-dv9jz [4.099060636s]
Feb 16 12:56:52.333: INFO: Created: latency-svc-26rmk
Feb 16 12:56:52.348: INFO: Got endpoints: latency-svc-26rmk [223.819795ms]
Feb 16 12:56:52.384: INFO: Created: latency-svc-dnx9r
Feb 16 12:56:52.397: INFO: Got endpoints: latency-svc-dnx9r [3.735172287s]
Feb 16 12:56:52.525: INFO: Created: latency-svc-gtlvv
Feb 16 12:56:52.548: INFO: Got endpoints: latency-svc-gtlvv [3.806974458s]
Feb 16 12:56:52.764: INFO: Created: latency-svc-j27hw
Feb 16 12:56:52.768: INFO: Got endpoints: latency-svc-j27hw [3.838883087s]
Feb 16 12:56:52.803: INFO: Created: latency-svc-vgdlh
Feb 16 12:56:52.829: INFO: Got endpoints: latency-svc-vgdlh [3.641182694s]
Feb 16 12:56:52.936: INFO: Created: latency-svc-zzl9l
Feb 16 12:56:52.954: INFO: Got endpoints: latency-svc-zzl9l [3.659228022s]
Feb 16 12:56:53.003: INFO: Created: latency-svc-4nlzl
Feb 16 12:56:53.003: INFO: Got endpoints: latency-svc-4nlzl [3.39212187s]
Feb 16 12:56:53.267: INFO: Created: latency-svc-wp6gt
Feb 16 12:56:53.277: INFO: Got endpoints: latency-svc-wp6gt [3.344351044s]
Feb 16 12:56:53.536: INFO: Created: latency-svc-zz2zv
Feb 16 12:56:53.571: INFO: Got endpoints: latency-svc-zz2zv [3.354146139s]
Feb 16 12:56:53.858: INFO: Created: latency-svc-fq642
Feb 16 12:56:54.036: INFO: Got endpoints: latency-svc-fq642 [3.468089247s]
Feb 16 12:56:54.056: INFO: Created: latency-svc-wldl9
Feb 16 12:56:54.101: INFO: Got endpoints: latency-svc-wldl9 [3.186881345s]
Feb 16 12:56:54.252: INFO: Created: latency-svc-xbnnz
Feb 16 12:56:54.317: INFO: Got endpoints: latency-svc-xbnnz [2.892587103s]
Feb 16 12:56:54.496: INFO: Created: latency-svc-8snnc
Feb 16 12:56:54.660: INFO: Got endpoints: latency-svc-8snnc [2.65942986s]
Feb 16 12:56:54.709: INFO: Created: latency-svc-vnxzz
Feb 16 12:56:54.816: INFO: Got endpoints: latency-svc-vnxzz [2.768887619s]
Feb 16 12:56:54.835: INFO: Created: latency-svc-m7w72
Feb 16 12:56:55.101: INFO: Got endpoints: latency-svc-m7w72 [2.789519193s]
Feb 16 12:56:55.173: INFO: Created: latency-svc-sf6c9
Feb 16 12:56:55.192: INFO: Created: latency-svc-45xhz
Feb 16 12:56:55.192: INFO: Got endpoints: latency-svc-sf6c9 [2.844340596s]
Feb 16 12:56:55.295: INFO: Got endpoints: latency-svc-45xhz [2.897971316s]
Feb 16 12:56:55.312: INFO: Created: latency-svc-7rjmv
Feb 16 12:56:55.324: INFO: Got endpoints: latency-svc-7rjmv [2.77622665s]
Feb 16 12:56:55.382: INFO: Created: latency-svc-v4qq4
Feb 16 12:56:55.503: INFO: Got endpoints: latency-svc-v4qq4 [2.734322452s]
Feb 16 12:56:55.536: INFO: Created: latency-svc-z4jld
Feb 16 12:56:55.536: INFO: Got endpoints: latency-svc-z4jld [2.706858215s]
Feb 16 12:56:55.747: INFO: Created: latency-svc-2th6s
Feb 16 12:56:55.755: INFO: Got endpoints: latency-svc-2th6s [2.80093082s]
Feb 16 12:56:55.818: INFO: Created: latency-svc-792vl
Feb 16 12:56:55.818: INFO: Got endpoints: latency-svc-792vl [2.815148389s]
Feb 16 12:56:55.933: INFO: Created: latency-svc-mhh5x
Feb 16 12:56:55.965: INFO: Got endpoints: latency-svc-mhh5x [2.686816207s]
Feb 16 12:56:56.201: INFO: Created: latency-svc-k2bmh
Feb 16 12:56:56.217: INFO: Got endpoints: latency-svc-k2bmh [2.645636343s]
Feb 16 12:56:56.347: INFO: Created: latency-svc-rjln7
Feb 16 12:56:56.380: INFO: Got endpoints: latency-svc-rjln7 [2.34394991s]
Feb 16 12:56:56.419: INFO: Created: latency-svc-7lpws
Feb 16 12:56:56.432: INFO: Got endpoints: latency-svc-7lpws [2.330796292s]
Feb 16 12:56:56.580: INFO: Created: latency-svc-b7xhb
Feb 16 12:56:56.606: INFO: Got endpoints: latency-svc-b7xhb [2.288447129s]
Feb 16 12:56:56.752: INFO: Created: latency-svc-vdv56
Feb 16 12:56:56.760: INFO: Got endpoints: latency-svc-vdv56 [2.098797635s]
Feb 16 12:56:56.809: INFO: Created: latency-svc-dj7rl
Feb 16 12:56:56.826: INFO: Got endpoints: latency-svc-dj7rl [2.009476341s]
Feb 16 12:56:56.950: INFO: Created: latency-svc-x4q6h
Feb 16 12:56:56.971: INFO: Got endpoints: latency-svc-x4q6h [1.869307598s]
Feb 16 12:56:57.144: INFO: Created: latency-svc-hgs6t
Feb 16 12:56:57.152: INFO: Got endpoints: latency-svc-hgs6t [1.960046872s]
Feb 16 12:56:57.184: INFO: Created: latency-svc-cwb57
Feb 16 12:56:57.209: INFO: Got endpoints: latency-svc-cwb57 [1.913621403s]
Feb 16 12:56:57.317: INFO: Created: latency-svc-gbvv5
Feb 16 12:56:57.337: INFO: Got endpoints: latency-svc-gbvv5 [2.012277539s]
Feb 16 12:56:57.385: INFO: Created: latency-svc-lpzwj
Feb 16 12:56:57.491: INFO: Got endpoints: latency-svc-lpzwj [1.988281973s]
Feb 16 12:56:57.529: INFO: Created: latency-svc-wlg9w
Feb 16 12:56:57.564: INFO: Got endpoints: latency-svc-wlg9w [2.02750222s]
Feb 16 12:56:57.574: INFO: Created: latency-svc-vxvvz
Feb 16 12:56:57.689: INFO: Got endpoints: latency-svc-vxvvz [1.933387056s]
Feb 16 12:56:57.733: INFO: Created: latency-svc-hd9ng
Feb 16 12:56:57.740: INFO: Got endpoints: latency-svc-hd9ng [1.921523085s]
Feb 16 12:56:57.784: INFO: Created: latency-svc-7d8dv
Feb 16 12:56:57.805: INFO: Got endpoints: latency-svc-7d8dv [1.839670735s]
Feb 16 12:56:57.904: INFO: Created: latency-svc-mgpq5
Feb 16 12:56:57.911: INFO: Got endpoints: latency-svc-mgpq5 [1.692928323s]
Feb 16 12:56:57.953: INFO: Created: latency-svc-4w4cs
Feb 16 12:56:57.963: INFO: Got endpoints: latency-svc-4w4cs [1.582995903s]
Feb 16 12:56:58.057: INFO: Created: latency-svc-rcdb7
Feb 16 12:56:58.065: INFO: Got endpoints: latency-svc-rcdb7 [1.632752273s]
Feb 16 12:56:58.114: INFO: Created: latency-svc-bb94l
Feb 16 12:56:58.138: INFO: Got endpoints: latency-svc-bb94l [1.531894295s]
Feb 16 12:56:58.264: INFO: Created: latency-svc-8bbf2
Feb 16 12:56:59.546: INFO: Got endpoints: latency-svc-8bbf2 [2.78598156s]
Feb 16 12:56:59.583: INFO: Created: latency-svc-6ncr6
Feb 16 12:56:59.702: INFO: Got endpoints: latency-svc-6ncr6 [2.876035401s]
Feb 16 12:56:59.750: INFO: Created: latency-svc-dh7qn
Feb 16 12:56:59.770: INFO: Got endpoints: latency-svc-dh7qn [2.798287984s]
Feb 16 12:56:59.904: INFO: Created: latency-svc-2r96z
Feb 16 12:56:59.934: INFO: Got endpoints: latency-svc-2r96z [2.781662511s]
Feb 16 12:57:00.118: INFO: Created: latency-svc-z7cm4
Feb 16 12:57:00.126: INFO: Got endpoints: latency-svc-z7cm4 [2.917126244s]
Feb 16 12:57:00.194: INFO: Created: latency-svc-j5qjf
Feb 16 12:57:00.294: INFO: Got endpoints: latency-svc-j5qjf [2.956755335s]
Feb 16 12:57:00.476: INFO: Created: latency-svc-g75jw
Feb 16 12:57:00.508: INFO: Got endpoints: latency-svc-g75jw [3.016413027s]
Feb 16 12:57:00.727: INFO: Created: latency-svc-st6j7
Feb 16 12:57:00.770: INFO: Got endpoints: latency-svc-st6j7 [3.206401265s]
Feb 16 12:57:00.899: INFO: Created: latency-svc-plcbs
Feb 16 12:57:00.907: INFO: Got endpoints: latency-svc-plcbs [3.218449627s]
Feb 16 12:57:00.943: INFO: Created: latency-svc-vg7d8
Feb 16 12:57:00.958: INFO: Got endpoints: latency-svc-vg7d8 [3.218517751s]
Feb 16 12:57:01.075: INFO: Created: latency-svc-pvm7n
Feb 16 12:57:01.076: INFO: Got endpoints: latency-svc-pvm7n [3.270669627s]
Feb 16 12:57:01.273: INFO: Created: latency-svc-8g4g8
Feb 16 12:57:01.296: INFO: Got endpoints: latency-svc-8g4g8 [3.385154975s]
Feb 16 12:57:01.349: INFO: Created: latency-svc-4pjgl
Feb 16 12:57:01.367: INFO: Got endpoints: latency-svc-4pjgl [3.403606592s]
Feb 16 12:57:01.469: INFO: Created: latency-svc-kfw47
Feb 16 12:57:01.485: INFO: Got endpoints: latency-svc-kfw47 [3.419214153s]
Feb 16 12:57:01.535: INFO: Created: latency-svc-qbvh5
Feb 16 12:57:01.552: INFO: Got endpoints: latency-svc-qbvh5 [3.413578868s]
Feb 16 12:57:01.708: INFO: Created: latency-svc-fwgxv
Feb 16 12:57:01.787: INFO: Got endpoints: latency-svc-fwgxv [2.241094363s]
Feb 16 12:57:01.957: INFO: Created: latency-svc-qxt5r
Feb 16 12:57:01.958: INFO: Got endpoints: latency-svc-qxt5r [2.255004475s]
Feb 16 12:57:02.129: INFO: Created: latency-svc-dk55p
Feb 16 12:57:02.291: INFO: Created: latency-svc-fvpnv
Feb 16 12:57:02.293: INFO: Got endpoints: latency-svc-dk55p [2.523364057s]
Feb 16 12:57:02.301: INFO: Got endpoints: latency-svc-fvpnv [2.366113691s]
Feb 16 12:57:02.367: INFO: Created: latency-svc-9j76t
Feb 16 12:57:02.464: INFO: Got endpoints: latency-svc-9j76t [2.337319631s]
Feb 16 12:57:02.534: INFO: Created: latency-svc-rwhm8
Feb 16 12:57:02.706: INFO: Got endpoints: latency-svc-rwhm8 [2.41118293s]
Feb 16 12:57:03.268: INFO: Created: latency-svc-t89b7
Feb 16 12:57:03.290: INFO: Got endpoints: latency-svc-t89b7 [2.781652489s]
Feb 16 12:57:03.361: INFO: Created: latency-svc-9lcc4
Feb 16 12:57:03.417: INFO: Got endpoints: latency-svc-9lcc4 [2.646828268s]
Feb 16 12:57:03.613: INFO: Created: latency-svc-rcbvs
Feb 16 12:57:03.652: INFO: Got endpoints: latency-svc-rcbvs [2.743888357s]
Feb 16 12:57:03.760: INFO: Created: latency-svc-94zsc
Feb 16 12:57:03.823: INFO: Got endpoints: latency-svc-94zsc [2.864792688s]
Feb 16 12:57:03.859: INFO: Created: latency-svc-xmwrz
Feb 16 12:57:03.969: INFO: Got endpoints: latency-svc-xmwrz [2.892785991s]
Feb 16 12:57:04.014: INFO: Created: latency-svc-4z4jb
Feb 16 12:57:04.261: INFO: Created: latency-svc-jtlsd
Feb 16 12:57:04.261: INFO: Got endpoints: latency-svc-4z4jb [2.965353796s]
Feb 16 12:57:04.293: INFO: Got endpoints: latency-svc-jtlsd [2.925077684s]
Feb 16 12:57:04.346: INFO: Created: latency-svc-8j5cn
Feb 16 12:57:04.459: INFO: Got endpoints: latency-svc-8j5cn [2.974566605s]
Feb 16 12:57:04.705: INFO: Created: latency-svc-7cqvx
Feb 16 12:57:04.723: INFO: Got endpoints: latency-svc-7cqvx [3.17105784s]
Feb 16 12:57:04.958: INFO: Created: latency-svc-wxsvm
Feb 16 12:57:04.982: INFO: Got endpoints: latency-svc-wxsvm [3.194389811s]
Feb 16 12:57:05.210: INFO: Created: latency-svc-57jvl
Feb 16 12:57:05.225: INFO: Got endpoints: latency-svc-57jvl [3.26697772s]
Feb 16 12:57:05.535: INFO: Created: latency-svc-bszv8
Feb 16 12:57:05.544: INFO: Got endpoints: latency-svc-bszv8 [3.250099978s]
Feb 16 12:57:05.810: INFO: Created: latency-svc-5s592
Feb 16 12:57:05.836: INFO: Got endpoints: latency-svc-5s592 [3.535479778s]
Feb 16 12:57:05.904: INFO: Created: latency-svc-q7djp
Feb 16 12:57:06.030: INFO: Got endpoints: latency-svc-q7djp [3.565816049s]
Feb 16 12:57:06.055: INFO: Created: latency-svc-rjnx9
Feb 16 12:57:06.076: INFO: Got endpoints: latency-svc-rjnx9 [3.369861099s]
Feb 16 12:57:06.286: INFO: Created: latency-svc-5k927
Feb 16 12:57:06.310: INFO: Got endpoints: latency-svc-5k927 [3.019944556s]
Feb 16 12:57:06.361: INFO: Created: latency-svc-fcb8j
Feb 16 12:57:06.453: INFO: Got endpoints: latency-svc-fcb8j [3.035335372s]
Feb 16 12:57:06.496: INFO: Created: latency-svc-5str4
Feb 16 12:57:06.497: INFO: Got endpoints: latency-svc-5str4 [2.844836401s]
Feb 16 12:57:06.692: INFO: Created: latency-svc-8j6br
Feb 16 12:57:06.727: INFO: Got endpoints: latency-svc-8j6br [2.903583601s]
Feb 16 12:57:06.860: INFO: Created: latency-svc-bjdvp
Feb 16 12:57:06.875: INFO: Got endpoints: latency-svc-bjdvp [2.906272478s]
Feb 16 12:57:06.940: INFO: Created: latency-svc-4k2vm
Feb 16 12:57:07.025: INFO: Got endpoints: latency-svc-4k2vm [2.763746798s]
Feb 16 12:57:07.082: INFO: Created: latency-svc-r8qwz
Feb 16 12:57:07.099: INFO: Got endpoints: latency-svc-r8qwz [2.806136994s]
Feb 16 12:57:07.253: INFO: Created: latency-svc-vcnxh
Feb 16 12:57:07.254: INFO: Got endpoints: latency-svc-vcnxh [2.793649003s]
Feb 16 12:57:07.306: INFO: Created: latency-svc-6qptm
Feb 16 12:57:07.310: INFO: Got endpoints: latency-svc-6qptm [2.586508159s]
Feb 16 12:57:07.450: INFO: Created: latency-svc-2v6cv
Feb 16 12:57:07.486: INFO: Got endpoints: latency-svc-2v6cv [2.503527668s]
Feb 16 12:57:07.521: INFO: Created: latency-svc-ng2xf
Feb 16 12:57:07.586: INFO: Got endpoints: latency-svc-ng2xf [2.360293187s]
Feb 16 12:57:07.612: INFO: Created: latency-svc-kvlll
Feb 16 12:57:07.625: INFO: Got endpoints: latency-svc-kvlll [2.081248305s]
Feb 16 12:57:07.894: INFO: Created: latency-svc-c58ds
Feb 16 12:57:07.905: INFO: Got endpoints: latency-svc-c58ds [2.0689035s]
Feb 16 12:57:07.963: INFO: Created: latency-svc-4w6qk
Feb 16 12:57:08.094: INFO: Got endpoints: latency-svc-4w6qk [2.06373201s]
Feb 16 12:57:08.175: INFO: Created: latency-svc-vnb25
Feb 16 12:57:08.286: INFO: Got endpoints: latency-svc-vnb25 [2.209725856s]
Feb 16 12:57:08.288: INFO: Created: latency-svc-4wx44
Feb 16 12:57:08.316: INFO: Got endpoints: latency-svc-4wx44 [2.006030478s]
Feb 16 12:57:08.471: INFO: Created: latency-svc-5bw4p
Feb 16 12:57:08.538: INFO: Got endpoints: latency-svc-5bw4p [2.084822858s]
Feb 16 12:57:08.776: INFO: Created: latency-svc-9nh6d
Feb 16 12:57:08.777: INFO: Got endpoints: latency-svc-9nh6d [2.279733471s]
Feb 16 12:57:08.825: INFO: Created: latency-svc-bj7xk
Feb 16 12:57:08.833: INFO: Got endpoints: latency-svc-bj7xk [2.105081926s]
Feb 16 12:57:08.979: INFO: Created: latency-svc-zjfnb
Feb 16 12:57:08.997: INFO: Got endpoints: latency-svc-zjfnb [2.121267266s]
Feb 16 12:57:09.073: INFO: Created: latency-svc-nfdg6
Feb 16 12:57:09.177: INFO: Got endpoints: latency-svc-nfdg6 [2.151228227s]
Feb 16 12:57:09.193: INFO: Created: latency-svc-prwt8
Feb 16 12:57:09.213: INFO: Got endpoints: latency-svc-prwt8 [2.113456837s]
Feb 16 12:57:09.406: INFO: Created: latency-svc-k2tzb
Feb 16 12:57:09.409: INFO: Got endpoints: latency-svc-k2tzb [2.155639261s]
Feb 16 12:57:09.484: INFO: Created: latency-svc-b77ql
Feb 16 12:57:09.547: INFO: Got endpoints: latency-svc-b77ql [2.236450614s]
Feb 16 12:57:09.579: INFO: Created: latency-svc-qlnjc
Feb 16 12:57:09.588: INFO: Got endpoints: latency-svc-qlnjc [2.101803666s]
Feb 16 12:57:09.628: INFO: Created: latency-svc-rgp96
Feb 16 12:57:09.799: INFO: Got endpoints: latency-svc-rgp96 [2.213579507s]
Feb 16 12:57:09.831: INFO: Created: latency-svc-bhm7s
Feb 16 12:57:09.848: INFO: Got endpoints: latency-svc-bhm7s [2.223219214s]
Feb 16 12:57:09.911: INFO: Created: latency-svc-qzvq5
Feb 16 12:57:09.986: INFO: Got endpoints: latency-svc-qzvq5 [2.080725972s]
Feb 16 12:57:10.013: INFO: Created: latency-svc-w8kl5
Feb 16 12:57:10.030: INFO: Got endpoints: latency-svc-w8kl5 [1.935069766s]
Feb 16 12:57:10.087: INFO: Created: latency-svc-q9t29
Feb 16 12:57:10.215: INFO: Got endpoints: latency-svc-q9t29 [1.928939165s]
Feb 16 12:57:10.268: INFO: Created: latency-svc-cddh2
Feb 16 12:57:10.269: INFO: Got endpoints: latency-svc-cddh2 [1.951915303s]
Feb 16 12:57:10.517: INFO: Created: latency-svc-hpmm4
Feb 16 12:57:10.728: INFO: Got endpoints: latency-svc-hpmm4 [2.189193975s]
Feb 16 12:57:10.734: INFO: Created: latency-svc-bs9v6
Feb 16 12:57:10.758: INFO: Got endpoints: latency-svc-bs9v6 [1.981851285s]
Feb 16 12:57:10.920: INFO: Created: latency-svc-phztw
Feb 16 12:57:10.928: INFO: Got endpoints: latency-svc-phztw [2.095257992s]
Feb 16 12:57:10.968: INFO: Created: latency-svc-tlfn2
Feb 16 12:57:10.992: INFO: Got endpoints: latency-svc-tlfn2 [1.994809572s]
Feb 16 12:57:11.101: INFO: Created: latency-svc-zwpwq
Feb 16 12:57:11.124: INFO: Got endpoints: latency-svc-zwpwq [1.94617314s]
Feb 16 12:57:11.124: INFO: Latencies: [223.819795ms 290.449712ms 609.708465ms 854.284272ms 917.380065ms 1.141231633s 1.423838839s 1.531894295s 1.582995903s 1.632752273s 1.684830439s 1.692928323s 1.839670735s 1.869307598s 1.913621403s 1.921523085s 1.927737094s 1.928939165s 1.933387056s 1.935069766s 1.94617314s 1.951915303s 1.960046872s 1.981851285s 1.988281973s 1.994809572s 2.006030478s 2.009476341s 2.012277539s 2.02750222s 2.06373201s 2.0689035s 2.080725972s 2.081248305s 2.084822858s 2.085802824s 2.095257992s 2.098797635s 2.101803666s 2.105081926s 2.110795863s 2.113456837s 2.121267266s 2.13712824s 2.151228227s 2.155639261s 2.179479239s 2.184439242s 2.189193975s 2.209725856s 2.213579507s 2.223219214s 2.236450614s 2.241094363s 2.255004475s 2.265834474s 2.279580772s 2.279733471s 2.288447129s 2.322540756s 2.330796292s 2.337319631s 2.34394991s 2.345049379s 2.360293187s 2.366113691s 2.369016088s 2.382942055s 2.383323475s 2.397651238s 2.41118293s 2.503527668s 2.523364057s 2.526764095s 2.564922028s 2.586508159s 2.645636343s 2.646828268s 2.65942986s 2.662228844s 2.663515999s 2.682727169s 2.686816207s 2.706858215s 2.712527327s 2.734322452s 2.743888357s 2.763746798s 2.768887619s 2.77622665s 2.781652489s 2.781662511s 2.785131719s 2.78598156s 2.789519193s 2.793649003s 2.798287984s 2.799539016s 2.80093082s 2.806136994s 2.815148389s 2.831818562s 2.844340596s 2.844836401s 2.850035288s 2.864792688s 2.876035401s 2.877373175s 2.889037557s 2.892587103s 2.892785991s 2.897971316s 2.903583601s 2.906272478s 2.917126244s 2.925077684s 2.944197638s 2.949657985s 2.956755335s 2.965353796s 2.968273946s 2.974566605s 3.016413027s 3.019944556s 3.035335372s 3.077865277s 3.117132998s 3.1280376s 3.17105784s 3.184221664s 3.186881345s 3.187124438s 3.194389811s 3.200187421s 3.204438605s 3.204949211s 3.206401265s 3.218449627s 3.218517751s 3.226470782s 3.250099978s 3.26697772s 3.270669627s 3.298369055s 3.298668712s 3.315046058s 3.331024973s 3.344351044s 3.354146139s 3.369861099s 3.385154975s 3.386131464s 3.386826776s 3.39212187s 3.394449061s 3.403606592s 3.413578868s 3.419214153s 3.429467391s 3.468089247s 3.507040748s 3.517222413s 3.53285266s 3.535479778s 3.565816049s 3.572118224s 3.641182694s 3.64412821s 3.659228022s 3.695397589s 3.702133409s 3.735172287s 3.747805974s 3.763554895s 3.791517861s 3.806974458s 3.838883087s 3.857745532s 4.057758659s 4.099060636s 4.212085823s 4.230920598s 4.3633159s 4.458861034s 4.543526609s 4.595964515s 4.679443267s 4.91465577s 5.347628676s 5.405566687s 5.424085189s 5.469020439s 5.484290709s 5.487763452s 5.499119734s 5.542774784s 5.563078557s 5.681662709s 5.74260089s 5.75686848s]
Feb 16 12:57:11.125: INFO: 50 %ile: 2.815148389s
Feb 16 12:57:11.125: INFO: 90 %ile: 4.212085823s
Feb 16 12:57:11.125: INFO: 99 %ile: 5.74260089s
Feb 16 12:57:11.125: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:57:11.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-kq82p" for this suite.
Feb 16 12:58:17.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:58:18.201: INFO: namespace: e2e-tests-svc-latency-kq82p, resource: bindings, ignored listing per whitelist
Feb 16 12:58:18.271: INFO: namespace e2e-tests-svc-latency-kq82p deletion completed in 1m7.136929656s

• [SLOW TEST:120.339 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
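The 50/90/99 %ile summary lines above are derived from the sorted "Latencies:" sample list. A minimal sketch of that derivation, assuming the nearest-rank percentile method (the actual e2e framework implementation may differ), with toy data standing in for the 200 real samples:

```python
import math

def percentile(sorted_samples, pct):
    """Nearest-rank percentile of an ascending list: the ceil(pct/100 * n)-th sample."""
    if not sorted_samples:
        raise ValueError("no samples")
    rank = math.ceil(pct / 100 * len(sorted_samples))
    return sorted_samples[rank - 1]

# Toy latency samples in seconds (illustrative, not the real test data).
samples = sorted([0.2, 0.5, 1.1, 1.9, 2.3, 2.8, 3.4, 4.2, 5.1, 5.7])
print(percentile(samples, 50))  # 2.3
print(percentile(samples, 90))  # 5.1
print(percentile(samples, 99))  # 5.7
```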
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:58:18.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xb695
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 16 12:58:18.507: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 16 12:59:08.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xb695 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 12:59:08.959: INFO: >>> kubeConfig: /root/.kube/config
I0216 12:59:09.053449       9 log.go:172] (0xc000512630) (0xc000f32e60) Create stream
I0216 12:59:09.053625       9 log.go:172] (0xc000512630) (0xc000f32e60) Stream added, broadcasting: 1
I0216 12:59:09.062233       9 log.go:172] (0xc000512630) Reply frame received for 1
I0216 12:59:09.062360       9 log.go:172] (0xc000512630) (0xc002158460) Create stream
I0216 12:59:09.062380       9 log.go:172] (0xc000512630) (0xc002158460) Stream added, broadcasting: 3
I0216 12:59:09.063853       9 log.go:172] (0xc000512630) Reply frame received for 3
I0216 12:59:09.063895       9 log.go:172] (0xc000512630) (0xc0009983c0) Create stream
I0216 12:59:09.063909       9 log.go:172] (0xc000512630) (0xc0009983c0) Stream added, broadcasting: 5
I0216 12:59:09.065473       9 log.go:172] (0xc000512630) Reply frame received for 5
I0216 12:59:09.308229       9 log.go:172] (0xc000512630) Data frame received for 3
I0216 12:59:09.308402       9 log.go:172] (0xc002158460) (3) Data frame handling
I0216 12:59:09.308474       9 log.go:172] (0xc002158460) (3) Data frame sent
I0216 12:59:09.436716       9 log.go:172] (0xc000512630) Data frame received for 1
I0216 12:59:09.436837       9 log.go:172] (0xc000f32e60) (1) Data frame handling
I0216 12:59:09.436858       9 log.go:172] (0xc000f32e60) (1) Data frame sent
I0216 12:59:09.437728       9 log.go:172] (0xc000512630) (0xc002158460) Stream removed, broadcasting: 3
I0216 12:59:09.437809       9 log.go:172] (0xc000512630) (0xc000f32e60) Stream removed, broadcasting: 1
I0216 12:59:09.440117       9 log.go:172] (0xc000512630) (0xc0009983c0) Stream removed, broadcasting: 5
I0216 12:59:09.440191       9 log.go:172] (0xc000512630) (0xc000f32e60) Stream removed, broadcasting: 1
I0216 12:59:09.440202       9 log.go:172] (0xc000512630) (0xc002158460) Stream removed, broadcasting: 3
I0216 12:59:09.440214       9 log.go:172] (0xc000512630) (0xc0009983c0) Stream removed, broadcasting: 5
Feb 16 12:59:09.440: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:59:09.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xb695" for this suite.
Feb 16 12:59:33.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:59:33.716: INFO: namespace: e2e-tests-pod-network-test-xb695, resource: bindings, ignored listing per whitelist
Feb 16 12:59:33.725: INFO: namespace e2e-tests-pod-network-test-xb695 deletion completed in 24.268800121s

• [SLOW TEST:75.454 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
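The `ExecWithOptions` line above shows how the intra-pod UDP check works: the host-test-container pod curls the test container's `/dial` endpoint, which then sends a UDP request to the target pod and reports what it heard back. A sketch of how that probe URL is assembled (IPs and ports taken from the log; the helper name is illustrative, not part of the test framework):

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, proxy_port, target_ip, target_port,
             protocol="udp", request="hostName", tries=1):
    """Build the /dial URL used to probe pod-to-pod connectivity."""
    query = urlencode({
        "request": request,    # ask the target to echo its hostname
        "protocol": protocol,  # udp for this test variant
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:{proxy_port}/dial?{query}"

# Reproduces the URL seen in the log above.
print(dial_url("10.32.0.5", 8080, "10.32.0.4", 8081))
```

The final `Waiting for endpoints: map[]` line indicates every expected hostname was accounted for, so the test passes.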
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:59:33.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 12:59:34.111: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-jdlfh" to be "success or failure"
Feb 16 12:59:34.138: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.177084ms
Feb 16 12:59:36.878: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.766634615s
Feb 16 12:59:38.895: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.78342166s
Feb 16 12:59:40.917: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.805765328s
Feb 16 12:59:42.936: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824309138s
Feb 16 12:59:44.953: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.842021559s
Feb 16 12:59:47.089: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.977554319s
Feb 16 12:59:49.226: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.114635218s
STEP: Saw pod success
Feb 16 12:59:49.226: INFO: Pod "downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 12:59:49.243: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 12:59:49.418: INFO: Waiting for pod downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008 to disappear
Feb 16 12:59:49.458: INFO: Pod downwardapi-volume-2dcf9159-50bc-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 12:59:49.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jdlfh" for this suite.
Feb 16 12:59:55.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 12:59:55.618: INFO: namespace: e2e-tests-downward-api-jdlfh, resource: bindings, ignored listing per whitelist
Feb 16 12:59:55.746: INFO: namespace e2e-tests-downward-api-jdlfh deletion completed in 6.2810765s

• [SLOW TEST:22.020 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
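The downward API volume test above creates a pod whose memory request is projected into a file via `resourceFieldRef`, then checks the container's output. A sketch of the kind of manifest involved, assuming standard core/v1 field names; the image, request size, and paths here are illustrative, not the test's exact spec:

```python
def downward_api_pod(name, container_name, image):
    """Build a pod dict exposing the container's memory request as a volume file."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": container_name,
                "image": image,
                "resources": {"requests": {"memory": "32Mi"}},
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    "items": [{
                        "path": "memory_request",
                        "resourceFieldRef": {
                            "containerName": container_name,
                            "resource": "requests.memory",
                            # divisor "1" means the file holds the raw byte count
                            "divisor": "1",
                        },
                    }],
                },
            }],
        },
    }

pod = downward_api_pod("downwardapi-volume-demo", "client-container", "busybox")
```

The test then waits for the pod to reach "success or failure" (phase `Succeeded`, as seen at 12:59:49 above) and inspects its logs.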
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 12:59:55.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:00:55.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-25f4d" for this suite.
Feb 16 13:01:03.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:01:03.489: INFO: namespace: e2e-tests-container-runtime-25f4d, resource: bindings, ignored listing per whitelist
Feb 16 13:01:03.598: INFO: namespace e2e-tests-container-runtime-25f4d deletion completed in 8.259494055s

• [SLOW TEST:67.852 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:01:03.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-p7p4d
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 16 13:01:04.383: INFO: Found 0 stateful pods, waiting for 3
Feb 16 13:01:14.397: INFO: Found 1 stateful pods, waiting for 3
Feb 16 13:01:24.406: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:01:24.406: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:01:24.406: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 13:01:34.416: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:01:34.416: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:01:34.416: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 16 13:01:34.554: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 16 13:01:44.654: INFO: Updating stateful set ss2
Feb 16 13:01:44.677: INFO: Waiting for Pod e2e-tests-statefulset-p7p4d/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 16 13:01:55.180: INFO: Found 2 stateful pods, waiting for 3
Feb 16 13:02:05.283: INFO: Found 2 stateful pods, waiting for 3
Feb 16 13:02:15.953: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:02:15.953: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:02:15.953: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 16 13:02:25.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:02:25.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 16 13:02:25.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 16 13:02:25.249: INFO: Updating stateful set ss2
Feb 16 13:02:25.266: INFO: Waiting for Pod e2e-tests-statefulset-p7p4d/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 13:02:35.333: INFO: Waiting for Pod e2e-tests-statefulset-p7p4d/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 13:02:45.305: INFO: Updating stateful set ss2
Feb 16 13:02:45.355: INFO: Waiting for StatefulSet e2e-tests-statefulset-p7p4d/ss2 to complete update
Feb 16 13:02:45.355: INFO: Waiting for Pod e2e-tests-statefulset-p7p4d/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 13:02:55.667: INFO: Waiting for StatefulSet e2e-tests-statefulset-p7p4d/ss2 to complete update
Feb 16 13:02:55.667: INFO: Waiting for Pod e2e-tests-statefulset-p7p4d/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 16 13:03:05.421: INFO: Waiting for StatefulSet e2e-tests-statefulset-p7p4d/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 16 13:03:15.380: INFO: Deleting all statefulset in ns e2e-tests-statefulset-p7p4d
Feb 16 13:03:15.386: INFO: Scaling statefulset ss2 to 0
Feb 16 13:03:45.442: INFO: Waiting for statefulset status.replicas updated to 0
Feb 16 13:03:45.446: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:03:45.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-p7p4d" for this suite.
Feb 16 13:03:55.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:03:55.735: INFO: namespace: e2e-tests-statefulset-p7p4d, resource: bindings, ignored listing per whitelist
Feb 16 13:03:55.908: INFO: namespace e2e-tests-statefulset-p7p4d deletion completed in 10.32365157s

• [SLOW TEST:172.310 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:03:55.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 16 13:03:56.402: INFO: Waiting up to 5m0s for pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008" in namespace "e2e-tests-emptydir-hrlwt" to be "success or failure"
Feb 16 13:03:56.427: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.546954ms
Feb 16 13:03:58.451: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049191783s
Feb 16 13:04:00.472: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069760562s
Feb 16 13:04:03.006: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.603704154s
Feb 16 13:04:05.069: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.666955989s
Feb 16 13:04:07.083: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.681452343s
Feb 16 13:04:09.490: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.087752671s
STEP: Saw pod success
Feb 16 13:04:09.490: INFO: Pod "pod-ca0d4a65-50bc-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 13:04:09.514: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ca0d4a65-50bc-11ea-aa00-0242ac110008 container test-container: 
STEP: delete the pod
Feb 16 13:04:09.906: INFO: Waiting for pod pod-ca0d4a65-50bc-11ea-aa00-0242ac110008 to disappear
Feb 16 13:04:09.916: INFO: Pod pod-ca0d4a65-50bc-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:04:09.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hrlwt" for this suite.
Feb 16 13:04:15.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:04:16.003: INFO: namespace: e2e-tests-emptydir-hrlwt, resource: bindings, ignored listing per whitelist
Feb 16 13:04:16.106: INFO: namespace e2e-tests-emptydir-hrlwt deletion completed in 6.184416061s

• [SLOW TEST:20.197 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:04:16.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 13:04:16.245: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:04:17.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-zwt47" for this suite.
Feb 16 13:04:23.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:04:23.565: INFO: namespace: e2e-tests-custom-resource-definition-zwt47, resource: bindings, ignored listing per whitelist
Feb 16 13:04:24.070: INFO: namespace e2e-tests-custom-resource-definition-zwt47 deletion completed in 6.60483968s

• [SLOW TEST:7.963 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:04:24.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 16 13:04:39.174: INFO: Successfully updated pod "labelsupdatedad4dc25-50bc-11ea-aa00-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:04:41.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lvq6c" for this suite.
Feb 16 13:05:05.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:05:05.386: INFO: namespace: e2e-tests-downward-api-lvq6c, resource: bindings, ignored listing per whitelist
Feb 16 13:05:05.483: INFO: namespace e2e-tests-downward-api-lvq6c deletion completed in 24.181875734s

• [SLOW TEST:41.412 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:05:05.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:05:05.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-69l4d" to be "success or failure"
Feb 16 13:05:05.677: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.563185ms
Feb 16 13:05:07.867: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205667784s
Feb 16 13:05:09.884: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22278019s
Feb 16 13:05:13.118: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.456313055s
Feb 16 13:05:15.181: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.518911501s
Feb 16 13:05:17.245: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.583031433s
Feb 16 13:05:19.263: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.601771013s
STEP: Saw pod success
Feb 16 13:05:19.264: INFO: Pod "downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 13:05:19.270: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 13:05:19.575: INFO: Waiting for pod downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008 to disappear
Feb 16 13:05:19.605: INFO: Pod downwardapi-volume-f36f2658-50bc-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:05:19.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-69l4d" for this suite.
Feb 16 13:05:28.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:05:29.119: INFO: namespace: e2e-tests-projected-69l4d, resource: bindings, ignored listing per whitelist
Feb 16 13:05:29.143: INFO: namespace e2e-tests-projected-69l4d deletion completed in 9.525496986s

• [SLOW TEST:23.660 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:05:29.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 13:05:29.746: INFO: Creating ReplicaSet my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008
Feb 16 13:05:29.840: INFO: Pod name my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008: Found 0 pods out of 1
Feb 16 13:05:35.210: INFO: Pod name my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008: Found 1 pods out of 1
Feb 16 13:05:35.211: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008" is running
Feb 16 13:05:47.266: INFO: Pod "my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008-h454l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:05:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:05:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:05:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-16 13:05:29 +0000 UTC Reason: Message:}])
Feb 16 13:05:47.267: INFO: Trying to dial the pod
Feb 16 13:05:52.320: INFO: Controller my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008: Got expected result from replica 1 [my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008-h454l]: "my-hostname-basic-01ccd6d0-50bd-11ea-aa00-0242ac110008-h454l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:05:52.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-d5cr2" for this suite.
Feb 16 13:05:58.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:05:58.670: INFO: namespace: e2e-tests-replicaset-d5cr2, resource: bindings, ignored listing per whitelist
Feb 16 13:05:58.694: INFO: namespace e2e-tests-replicaset-d5cr2 deletion completed in 6.366050264s

• [SLOW TEST:29.549 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:05:58.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 16 13:05:59.138: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-r4tt2" to be "success or failure"
Feb 16 13:05:59.165: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.949479ms
Feb 16 13:06:01.505: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365942452s
Feb 16 13:06:03.513: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374321453s
Feb 16 13:06:05.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393544675s
Feb 16 13:06:07.840: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.701109297s
Feb 16 13:06:09.893: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.754544832s
Feb 16 13:06:11.921: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.782302465s
Feb 16 13:06:13.932: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.793551185s
Feb 16 13:06:15.945: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.806449871s
Feb 16 13:06:18.100: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.961510684s
Feb 16 13:06:20.175: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.036078122s
Feb 16 13:06:23.396: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.257419598s
STEP: Saw pod success
Feb 16 13:06:23.396: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 16 13:06:23.474: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 16 13:06:24.835: INFO: Waiting for pod pod-host-path-test to disappear
Feb 16 13:06:24.861: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:06:24.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-r4tt2" for this suite.
Feb 16 13:06:30.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:06:30.997: INFO: namespace: e2e-tests-hostpath-r4tt2, resource: bindings, ignored listing per whitelist
Feb 16 13:06:31.040: INFO: namespace e2e-tests-hostpath-r4tt2 deletion completed in 6.166805612s

• [SLOW TEST:32.345 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:06:31.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-266fc094-50bd-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 16 13:06:31.237: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008" in namespace "e2e-tests-projected-zmnrx" to be "success or failure"
Feb 16 13:06:31.384: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 146.867113ms
Feb 16 13:06:34.310: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.072682738s
Feb 16 13:06:36.376: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.139045039s
Feb 16 13:06:38.420: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.182592653s
Feb 16 13:06:40.439: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201510725s
Feb 16 13:06:42.478: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.241245877s
Feb 16 13:06:44.504: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.266443066s
STEP: Saw pod success
Feb 16 13:06:44.504: INFO: Pod "pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 13:06:44.519: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 16 13:06:44.959: INFO: Waiting for pod pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008 to disappear
Feb 16 13:06:44.974: INFO: Pod pod-projected-configmaps-2670fca3-50bd-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:06:44.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zmnrx" for this suite.
Feb 16 13:06:51.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:06:51.198: INFO: namespace: e2e-tests-projected-zmnrx, resource: bindings, ignored listing per whitelist
Feb 16 13:06:51.299: INFO: namespace e2e-tests-projected-zmnrx deletion completed in 6.313233599s

• [SLOW TEST:20.259 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:06:51.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 16 13:06:51.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-z49f2" to be "success or failure"
Feb 16 13:06:51.512: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.523952ms
Feb 16 13:06:53.534: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041410662s
Feb 16 13:06:55.545: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052331199s
Feb 16 13:06:58.029: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.536297914s
Feb 16 13:07:00.969: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.476156927s
Feb 16 13:07:03.012: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.519576858s
STEP: Saw pod success
Feb 16 13:07:03.012: INFO: Pod "downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 13:07:03.081: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008 container client-container: 
STEP: delete the pod
Feb 16 13:07:05.033: INFO: Waiting for pod downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008 to disappear
Feb 16 13:07:05.195: INFO: Pod downwardapi-volume-3282fe47-50bd-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:07:05.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z49f2" for this suite.
Feb 16 13:07:11.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:07:11.532: INFO: namespace: e2e-tests-downward-api-z49f2, resource: bindings, ignored listing per whitelist
Feb 16 13:07:11.628: INFO: namespace e2e-tests-downward-api-z49f2 deletion completed in 6.41979569s

• [SLOW TEST:20.329 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:07:11.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 16 13:07:12.156: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873188,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 13:07:12.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873188,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 16 13:07:22.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873201,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 16 13:07:22.198: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873201,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 16 13:07:32.339: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873213,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 13:07:32.339: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873213,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 16 13:07:42.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873226,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 16 13:07:42.481: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-a,UID:3ed4cbc5-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873226,Generation:0,CreationTimestamp:2020-02-16 13:07:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 16 13:07:52.524: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-b,UID:56e21e4b-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873239,Generation:0,CreationTimestamp:2020-02-16 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 13:07:52.524: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-b,UID:56e21e4b-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873239,Generation:0,CreationTimestamp:2020-02-16 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 16 13:08:02.562: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-b,UID:56e21e4b-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873252,Generation:0,CreationTimestamp:2020-02-16 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 16 13:08:02.563: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-fdlxt,SelfLink:/api/v1/namespaces/e2e-tests-watch-fdlxt/configmaps/e2e-watch-test-configmap-b,UID:56e21e4b-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873252,Generation:0,CreationTimestamp:2020-02-16 13:07:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:08:12.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fdlxt" for this suite.
Feb 16 13:08:18.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:08:18.962: INFO: namespace: e2e-tests-watch-fdlxt, resource: bindings, ignored listing per whitelist
Feb 16 13:08:19.172: INFO: namespace e2e-tests-watch-fdlxt deletion completed in 6.554890188s

• [SLOW TEST:67.544 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:08:19.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 16 13:08:19.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fc7s2'
Feb 16 13:08:23.058: INFO: stderr: ""
Feb 16 13:08:23.058: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 16 13:08:25.666: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:25.667: INFO: Found 0 / 1
Feb 16 13:08:26.125: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:26.126: INFO: Found 0 / 1
Feb 16 13:08:28.343: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:28.343: INFO: Found 0 / 1
Feb 16 13:08:29.380: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:29.380: INFO: Found 0 / 1
Feb 16 13:08:30.071: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:30.071: INFO: Found 0 / 1
Feb 16 13:08:31.082: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:31.082: INFO: Found 0 / 1
Feb 16 13:08:32.077: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:32.077: INFO: Found 0 / 1
Feb 16 13:08:33.087: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:33.087: INFO: Found 0 / 1
Feb 16 13:08:34.084: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:34.086: INFO: Found 0 / 1
Feb 16 13:08:35.831: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:35.832: INFO: Found 0 / 1
Feb 16 13:08:36.323: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:36.324: INFO: Found 0 / 1
Feb 16 13:08:37.116: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:37.116: INFO: Found 0 / 1
Feb 16 13:08:38.082: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:38.082: INFO: Found 0 / 1
Feb 16 13:08:39.080: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:39.080: INFO: Found 0 / 1
Feb 16 13:08:40.077: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:40.077: INFO: Found 1 / 1
Feb 16 13:08:40.077: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 16 13:08:40.083: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:40.083: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 16 13:08:40.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-t7hzg --namespace=e2e-tests-kubectl-fc7s2 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 16 13:08:40.221: INFO: stderr: ""
Feb 16 13:08:40.222: INFO: stdout: "pod/redis-master-t7hzg patched\n"
STEP: checking annotations
Feb 16 13:08:40.233: INFO: Selector matched 1 pods for map[app:redis]
Feb 16 13:08:40.233: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:08:40.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fc7s2" for this suite.
Feb 16 13:09:20.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:09:20.667: INFO: namespace: e2e-tests-kubectl-fc7s2, resource: bindings, ignored listing per whitelist
Feb 16 13:09:20.674: INFO: namespace e2e-tests-kubectl-fc7s2 deletion completed in 40.435150494s

• [SLOW TEST:61.501 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:09:20.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-8ba7a673-50bd-11ea-aa00-0242ac110008
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-8ba7a673-50bd-11ea-aa00-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:09:35.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p8tmw" for this suite.
Feb 16 13:10:05.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:10:05.328: INFO: namespace: e2e-tests-projected-p8tmw, resource: bindings, ignored listing per whitelist
Feb 16 13:10:05.455: INFO: namespace e2e-tests-projected-p8tmw deletion completed in 30.22213981s

• [SLOW TEST:44.780 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:10:05.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 16 13:10:05.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:06.022: INFO: stderr: ""
Feb 16 13:10:06.022: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 13:10:06.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:06.189: INFO: stderr: ""
Feb 16 13:10:06.189: INFO: stdout: "update-demo-nautilus-d2gzk "
STEP: Replicas for name=update-demo: expected=2 actual=1
Feb 16 13:10:11.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:11.346: INFO: stderr: ""
Feb 16 13:10:11.346: INFO: stdout: "update-demo-nautilus-d2gzk update-demo-nautilus-sm46x "
Feb 16 13:10:11.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:11.451: INFO: stderr: ""
Feb 16 13:10:11.452: INFO: stdout: ""
Feb 16 13:10:11.452: INFO: update-demo-nautilus-d2gzk is created but not running
Feb 16 13:10:16.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:16.650: INFO: stderr: ""
Feb 16 13:10:16.650: INFO: stdout: "update-demo-nautilus-d2gzk update-demo-nautilus-sm46x "
Feb 16 13:10:16.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:16.806: INFO: stderr: ""
Feb 16 13:10:16.806: INFO: stdout: ""
Feb 16 13:10:16.806: INFO: update-demo-nautilus-d2gzk is created but not running
Feb 16 13:10:21.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:21.983: INFO: stderr: ""
Feb 16 13:10:21.983: INFO: stdout: "update-demo-nautilus-d2gzk update-demo-nautilus-sm46x "
Feb 16 13:10:21.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:22.111: INFO: stderr: ""
Feb 16 13:10:22.111: INFO: stdout: "true"
Feb 16 13:10:22.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2gzk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:22.217: INFO: stderr: ""
Feb 16 13:10:22.217: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 13:10:22.217: INFO: validating pod update-demo-nautilus-d2gzk
Feb 16 13:10:22.251: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 13:10:22.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 13:10:22.251: INFO: update-demo-nautilus-d2gzk is verified up and running
Feb 16 13:10:22.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sm46x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:22.366: INFO: stderr: ""
Feb 16 13:10:22.366: INFO: stdout: "true"
Feb 16 13:10:22.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sm46x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:10:22.463: INFO: stderr: ""
Feb 16 13:10:22.464: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 16 13:10:22.464: INFO: validating pod update-demo-nautilus-sm46x
Feb 16 13:10:22.487: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 16 13:10:22.487: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 16 13:10:22.487: INFO: update-demo-nautilus-sm46x is verified up and running
STEP: rolling-update to new replication controller
Feb 16 13:10:22.491: INFO: scanned /root for discovery docs: 
Feb 16 13:10:22.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:01.164: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 16 13:11:01.164: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 16 13:11:01.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:01.310: INFO: stderr: ""
Feb 16 13:11:01.311: INFO: stdout: "update-demo-kitten-ssz2c update-demo-kitten-t5lqw update-demo-nautilus-d2gzk "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 16 13:11:06.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:06.498: INFO: stderr: ""
Feb 16 13:11:06.498: INFO: stdout: "update-demo-kitten-ssz2c update-demo-kitten-t5lqw "
Feb 16 13:11:06.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ssz2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:06.690: INFO: stderr: ""
Feb 16 13:11:06.690: INFO: stdout: "true"
Feb 16 13:11:06.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ssz2c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:06.865: INFO: stderr: ""
Feb 16 13:11:06.866: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 16 13:11:06.866: INFO: validating pod update-demo-kitten-ssz2c
Feb 16 13:11:06.950: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 16 13:11:06.950: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 16 13:11:06.951: INFO: update-demo-kitten-ssz2c is verified up and running
Feb 16 13:11:06.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t5lqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:07.119: INFO: stderr: ""
Feb 16 13:11:07.119: INFO: stdout: "true"
Feb 16 13:11:07.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t5lqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-thct6'
Feb 16 13:11:07.242: INFO: stderr: ""
Feb 16 13:11:07.243: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 16 13:11:07.243: INFO: validating pod update-demo-kitten-t5lqw
Feb 16 13:11:07.278: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 16 13:11:07.278: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 16 13:11:07.278: INFO: update-demo-kitten-t5lqw is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:11:07.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-thct6" for this suite.
Feb 16 13:11:49.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:11:49.547: INFO: namespace: e2e-tests-kubectl-thct6, resource: bindings, ignored listing per whitelist
Feb 16 13:11:49.608: INFO: namespace e2e-tests-kubectl-thct6 deletion completed in 42.30777296s

• [SLOW TEST:104.152 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:11:49.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 16 13:11:49.929: INFO: Creating deployment "test-recreate-deployment"
Feb 16 13:11:49.965: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 16 13:11:50.124: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 16 13:11:53.654: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 16 13:11:54.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:11:56.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:11:58.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:12:01.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:12:02.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:12:04.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:12:06.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717455510, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 16 13:12:08.135: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 16 13:12:08.162: INFO: Updating deployment test-recreate-deployment
Feb 16 13:12:08.162: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 16 13:12:10.692: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-fc9wz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fc9wz/deployments/test-recreate-deployment,UID:e468db6d-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873770,Generation:2,CreationTimestamp:2020-02-16 13:11:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-16 13:12:10 +0000 UTC 2020-02-16 13:12:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-16 13:12:10 +0000 UTC 2020-02-16 13:11:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 16 13:12:10.709: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-fc9wz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fc9wz/replicasets/test-recreate-deployment-589c4bfd,UID:ef935316-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873768,Generation:1,CreationTimestamp:2020-02-16 13:12:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e468db6d-50bd-11ea-a994-fa163e34d433 0xc001d5fddf 0xc001d5fdf0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 13:12:10.709: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 16 13:12:10.710: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-fc9wz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fc9wz/replicasets/test-recreate-deployment-5bf7f65dc,UID:e483e188-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873759,Generation:2,CreationTimestamp:2020-02-16 13:11:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e468db6d-50bd-11ea-a994-fa163e34d433 0xc001d5fec0 0xc001d5fec1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 16 13:12:10.719: INFO: Pod "test-recreate-deployment-589c4bfd-b6g7n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-b6g7n,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-fc9wz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fc9wz/pods/test-recreate-deployment-589c4bfd-b6g7n,UID:efa84e19-50bd-11ea-a994-fa163e34d433,ResourceVersion:21873773,Generation:0,CreationTimestamp:2020-02-16 13:12:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd ef935316-50bd-11ea-a994-fa163e34d433 0xc001ec509f 0xc001ec50b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mhwhd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mhwhd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-mhwhd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ec5110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ec5130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:12:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:12:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:12:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 13:12:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-16 13:12:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:12:10.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-fc9wz" for this suite.
Feb 16 13:12:20.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:12:20.688: INFO: namespace: e2e-tests-deployment-fc9wz, resource: bindings, ignored listing per whitelist
Feb 16 13:12:20.713: INFO: namespace e2e-tests-deployment-fc9wz deletion completed in 9.987798938s

• [SLOW TEST:31.105 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
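For reference, the Deployment object exercised by the RecreateDeployment test above can be reconstructed from the object dump in the log (name, labels, image, and strategy all appear there; this is a readable sketch, not the exact test fixture). The key field is `strategy.type: Recreate`, which terminates all old pods before any new pods are created — the behavior the test verifies:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate        # delete all old pods, then create new ones (no overlap)
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

The second rollout in the log (revision 2, ReplicaSet `test-recreate-deployment-589c4bfd`) swaps the pod template image, which under the `Recreate` strategy scales the old ReplicaSet to 0 before the new pod starts — hence the transient `MinimumReplicasUnavailable` condition visible in the status dumps.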
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:12:20.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 16 13:12:55.038: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:55.038: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:55.149544       9 log.go:172] (0xc000acaf20) (0xc0008b2820) Create stream
I0216 13:12:55.149838       9 log.go:172] (0xc000acaf20) (0xc0008b2820) Stream added, broadcasting: 1
I0216 13:12:55.161104       9 log.go:172] (0xc000acaf20) Reply frame received for 1
I0216 13:12:55.161283       9 log.go:172] (0xc000acaf20) (0xc001d1c640) Create stream
I0216 13:12:55.161312       9 log.go:172] (0xc000acaf20) (0xc001d1c640) Stream added, broadcasting: 3
I0216 13:12:55.166048       9 log.go:172] (0xc000acaf20) Reply frame received for 3
I0216 13:12:55.166264       9 log.go:172] (0xc000acaf20) (0xc0008b2be0) Create stream
I0216 13:12:55.166356       9 log.go:172] (0xc000acaf20) (0xc0008b2be0) Stream added, broadcasting: 5
I0216 13:12:55.169714       9 log.go:172] (0xc000acaf20) Reply frame received for 5
I0216 13:12:55.362749       9 log.go:172] (0xc000acaf20) Data frame received for 3
I0216 13:12:55.362845       9 log.go:172] (0xc001d1c640) (3) Data frame handling
I0216 13:12:55.362892       9 log.go:172] (0xc001d1c640) (3) Data frame sent
I0216 13:12:55.503278       9 log.go:172] (0xc000acaf20) Data frame received for 1
I0216 13:12:55.503437       9 log.go:172] (0xc000acaf20) (0xc001d1c640) Stream removed, broadcasting: 3
I0216 13:12:55.503569       9 log.go:172] (0xc0008b2820) (1) Data frame handling
I0216 13:12:55.503604       9 log.go:172] (0xc0008b2820) (1) Data frame sent
I0216 13:12:55.503616       9 log.go:172] (0xc000acaf20) (0xc0008b2820) Stream removed, broadcasting: 1
I0216 13:12:55.503756       9 log.go:172] (0xc000acaf20) (0xc0008b2be0) Stream removed, broadcasting: 5
I0216 13:12:55.503882       9 log.go:172] (0xc000acaf20) (0xc0008b2820) Stream removed, broadcasting: 1
I0216 13:12:55.503892       9 log.go:172] (0xc000acaf20) (0xc001d1c640) Stream removed, broadcasting: 3
I0216 13:12:55.503895       9 log.go:172] (0xc000acaf20) (0xc0008b2be0) Stream removed, broadcasting: 5
Feb 16 13:12:55.504: INFO: Exec stderr: ""
I0216 13:12:55.504655       9 log.go:172] (0xc000acaf20) Go away received
Feb 16 13:12:55.504: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:55.504: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:55.635904       9 log.go:172] (0xc000512630) (0xc0019fc6e0) Create stream
I0216 13:12:55.636257       9 log.go:172] (0xc000512630) (0xc0019fc6e0) Stream added, broadcasting: 1
I0216 13:12:55.659917       9 log.go:172] (0xc000512630) Reply frame received for 1
I0216 13:12:55.660181       9 log.go:172] (0xc000512630) (0xc001296000) Create stream
I0216 13:12:55.660210       9 log.go:172] (0xc000512630) (0xc001296000) Stream added, broadcasting: 3
I0216 13:12:55.662456       9 log.go:172] (0xc000512630) Reply frame received for 3
I0216 13:12:55.662514       9 log.go:172] (0xc000512630) (0xc0019fc780) Create stream
I0216 13:12:55.662541       9 log.go:172] (0xc000512630) (0xc0019fc780) Stream added, broadcasting: 5
I0216 13:12:55.665444       9 log.go:172] (0xc000512630) Reply frame received for 5
I0216 13:12:55.868715       9 log.go:172] (0xc000512630) Data frame received for 3
I0216 13:12:55.868881       9 log.go:172] (0xc001296000) (3) Data frame handling
I0216 13:12:55.868910       9 log.go:172] (0xc001296000) (3) Data frame sent
I0216 13:12:56.006167       9 log.go:172] (0xc000512630) Data frame received for 1
I0216 13:12:56.006321       9 log.go:172] (0xc000512630) (0xc001296000) Stream removed, broadcasting: 3
I0216 13:12:56.006379       9 log.go:172] (0xc0019fc6e0) (1) Data frame handling
I0216 13:12:56.006422       9 log.go:172] (0xc0019fc6e0) (1) Data frame sent
I0216 13:12:56.006472       9 log.go:172] (0xc000512630) (0xc0019fc780) Stream removed, broadcasting: 5
I0216 13:12:56.006531       9 log.go:172] (0xc000512630) (0xc0019fc6e0) Stream removed, broadcasting: 1
I0216 13:12:56.006584       9 log.go:172] (0xc000512630) Go away received
I0216 13:12:56.006840       9 log.go:172] (0xc000512630) (0xc0019fc6e0) Stream removed, broadcasting: 1
I0216 13:12:56.006858       9 log.go:172] (0xc000512630) (0xc001296000) Stream removed, broadcasting: 3
I0216 13:12:56.006870       9 log.go:172] (0xc000512630) (0xc0019fc780) Stream removed, broadcasting: 5
Feb 16 13:12:56.006: INFO: Exec stderr: ""
Feb 16 13:12:56.007: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:56.007: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:56.104025       9 log.go:172] (0xc0008324d0) (0xc001d1ca00) Create stream
I0216 13:12:56.104198       9 log.go:172] (0xc0008324d0) (0xc001d1ca00) Stream added, broadcasting: 1
I0216 13:12:56.114502       9 log.go:172] (0xc0008324d0) Reply frame received for 1
I0216 13:12:56.114692       9 log.go:172] (0xc0008324d0) (0xc001d1caa0) Create stream
I0216 13:12:56.114715       9 log.go:172] (0xc0008324d0) (0xc001d1caa0) Stream added, broadcasting: 3
I0216 13:12:56.116561       9 log.go:172] (0xc0008324d0) Reply frame received for 3
I0216 13:12:56.116589       9 log.go:172] (0xc0008324d0) (0xc0008b2d20) Create stream
I0216 13:12:56.116597       9 log.go:172] (0xc0008324d0) (0xc0008b2d20) Stream added, broadcasting: 5
I0216 13:12:56.117739       9 log.go:172] (0xc0008324d0) Reply frame received for 5
I0216 13:12:56.244422       9 log.go:172] (0xc0008324d0) Data frame received for 3
I0216 13:12:56.244520       9 log.go:172] (0xc001d1caa0) (3) Data frame handling
I0216 13:12:56.244590       9 log.go:172] (0xc001d1caa0) (3) Data frame sent
I0216 13:12:56.399645       9 log.go:172] (0xc0008324d0) Data frame received for 1
I0216 13:12:56.399784       9 log.go:172] (0xc0008324d0) (0xc001d1caa0) Stream removed, broadcasting: 3
I0216 13:12:56.399850       9 log.go:172] (0xc001d1ca00) (1) Data frame handling
I0216 13:12:56.399885       9 log.go:172] (0xc001d1ca00) (1) Data frame sent
I0216 13:12:56.399917       9 log.go:172] (0xc0008324d0) (0xc0008b2d20) Stream removed, broadcasting: 5
I0216 13:12:56.399999       9 log.go:172] (0xc0008324d0) (0xc001d1ca00) Stream removed, broadcasting: 1
I0216 13:12:56.400031       9 log.go:172] (0xc0008324d0) Go away received
I0216 13:12:56.400210       9 log.go:172] (0xc0008324d0) (0xc001d1ca00) Stream removed, broadcasting: 1
I0216 13:12:56.400230       9 log.go:172] (0xc0008324d0) (0xc001d1caa0) Stream removed, broadcasting: 3
I0216 13:12:56.400240       9 log.go:172] (0xc0008324d0) (0xc0008b2d20) Stream removed, broadcasting: 5
Feb 16 13:12:56.400: INFO: Exec stderr: ""
Feb 16 13:12:56.400: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:56.400: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:56.484197       9 log.go:172] (0xc0008329a0) (0xc001d1cf00) Create stream
I0216 13:12:56.484396       9 log.go:172] (0xc0008329a0) (0xc001d1cf00) Stream added, broadcasting: 1
I0216 13:12:56.493533       9 log.go:172] (0xc0008329a0) Reply frame received for 1
I0216 13:12:56.493596       9 log.go:172] (0xc0008329a0) (0xc0019fc820) Create stream
I0216 13:12:56.493619       9 log.go:172] (0xc0008329a0) (0xc0019fc820) Stream added, broadcasting: 3
I0216 13:12:56.495379       9 log.go:172] (0xc0008329a0) Reply frame received for 3
I0216 13:12:56.495431       9 log.go:172] (0xc0008329a0) (0xc0019fc960) Create stream
I0216 13:12:56.495454       9 log.go:172] (0xc0008329a0) (0xc0019fc960) Stream added, broadcasting: 5
I0216 13:12:56.497071       9 log.go:172] (0xc0008329a0) Reply frame received for 5
I0216 13:12:56.755811       9 log.go:172] (0xc0008329a0) Data frame received for 3
I0216 13:12:56.755912       9 log.go:172] (0xc0019fc820) (3) Data frame handling
I0216 13:12:56.755943       9 log.go:172] (0xc0019fc820) (3) Data frame sent
I0216 13:12:56.928001       9 log.go:172] (0xc0008329a0) Data frame received for 1
I0216 13:12:56.928172       9 log.go:172] (0xc0008329a0) (0xc0019fc820) Stream removed, broadcasting: 3
I0216 13:12:56.928258       9 log.go:172] (0xc001d1cf00) (1) Data frame handling
I0216 13:12:56.928284       9 log.go:172] (0xc001d1cf00) (1) Data frame sent
I0216 13:12:56.928327       9 log.go:172] (0xc0008329a0) (0xc0019fc960) Stream removed, broadcasting: 5
I0216 13:12:56.928404       9 log.go:172] (0xc0008329a0) (0xc001d1cf00) Stream removed, broadcasting: 1
I0216 13:12:56.928463       9 log.go:172] (0xc0008329a0) Go away received
I0216 13:12:56.928729       9 log.go:172] (0xc0008329a0) (0xc001d1cf00) Stream removed, broadcasting: 1
I0216 13:12:56.928755       9 log.go:172] (0xc0008329a0) (0xc0019fc820) Stream removed, broadcasting: 3
I0216 13:12:56.928769       9 log.go:172] (0xc0008329a0) (0xc0019fc960) Stream removed, broadcasting: 5
Feb 16 13:12:56.928: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 16 13:12:56.928: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:56.929: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:57.023867       9 log.go:172] (0xc0021b02c0) (0xc001296500) Create stream
I0216 13:12:57.024147       9 log.go:172] (0xc0021b02c0) (0xc001296500) Stream added, broadcasting: 1
I0216 13:12:57.030330       9 log.go:172] (0xc0021b02c0) Reply frame received for 1
I0216 13:12:57.030369       9 log.go:172] (0xc0021b02c0) (0xc0008b2e60) Create stream
I0216 13:12:57.030383       9 log.go:172] (0xc0021b02c0) (0xc0008b2e60) Stream added, broadcasting: 3
I0216 13:12:57.035714       9 log.go:172] (0xc0021b02c0) Reply frame received for 3
I0216 13:12:57.035786       9 log.go:172] (0xc0021b02c0) (0xc000f50000) Create stream
I0216 13:12:57.035798       9 log.go:172] (0xc0021b02c0) (0xc000f50000) Stream added, broadcasting: 5
I0216 13:12:57.038411       9 log.go:172] (0xc0021b02c0) Reply frame received for 5
I0216 13:12:57.272782       9 log.go:172] (0xc0021b02c0) Data frame received for 3
I0216 13:12:57.272954       9 log.go:172] (0xc0008b2e60) (3) Data frame handling
I0216 13:12:57.272987       9 log.go:172] (0xc0008b2e60) (3) Data frame sent
I0216 13:12:57.387456       9 log.go:172] (0xc0021b02c0) Data frame received for 1
I0216 13:12:57.387567       9 log.go:172] (0xc0021b02c0) (0xc000f50000) Stream removed, broadcasting: 5
I0216 13:12:57.387626       9 log.go:172] (0xc001296500) (1) Data frame handling
I0216 13:12:57.387662       9 log.go:172] (0xc001296500) (1) Data frame sent
I0216 13:12:57.387705       9 log.go:172] (0xc0021b02c0) (0xc0008b2e60) Stream removed, broadcasting: 3
I0216 13:12:57.387765       9 log.go:172] (0xc0021b02c0) (0xc001296500) Stream removed, broadcasting: 1
I0216 13:12:57.387802       9 log.go:172] (0xc0021b02c0) Go away received
I0216 13:12:57.388051       9 log.go:172] (0xc0021b02c0) (0xc001296500) Stream removed, broadcasting: 1
I0216 13:12:57.388067       9 log.go:172] (0xc0021b02c0) (0xc0008b2e60) Stream removed, broadcasting: 3
I0216 13:12:57.388086       9 log.go:172] (0xc0021b02c0) (0xc000f50000) Stream removed, broadcasting: 5
Feb 16 13:12:57.388: INFO: Exec stderr: ""
Feb 16 13:12:57.388: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:57.388: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:57.461233       9 log.go:172] (0xc000832e70) (0xc001d1d220) Create stream
I0216 13:12:57.461381       9 log.go:172] (0xc000832e70) (0xc001d1d220) Stream added, broadcasting: 1
I0216 13:12:57.467874       9 log.go:172] (0xc000832e70) Reply frame received for 1
I0216 13:12:57.468114       9 log.go:172] (0xc000832e70) (0xc000f500a0) Create stream
I0216 13:12:57.468157       9 log.go:172] (0xc000832e70) (0xc000f500a0) Stream added, broadcasting: 3
I0216 13:12:57.470424       9 log.go:172] (0xc000832e70) Reply frame received for 3
I0216 13:12:57.470528       9 log.go:172] (0xc000832e70) (0xc000f50140) Create stream
I0216 13:12:57.470542       9 log.go:172] (0xc000832e70) (0xc000f50140) Stream added, broadcasting: 5
I0216 13:12:57.471785       9 log.go:172] (0xc000832e70) Reply frame received for 5
I0216 13:12:57.726611       9 log.go:172] (0xc000832e70) Data frame received for 3
I0216 13:12:57.726722       9 log.go:172] (0xc000f500a0) (3) Data frame handling
I0216 13:12:57.726756       9 log.go:172] (0xc000f500a0) (3) Data frame sent
I0216 13:12:57.840726       9 log.go:172] (0xc000832e70) Data frame received for 1
I0216 13:12:57.840934       9 log.go:172] (0xc000832e70) (0xc000f50140) Stream removed, broadcasting: 5
I0216 13:12:57.841133       9 log.go:172] (0xc001d1d220) (1) Data frame handling
I0216 13:12:57.841199       9 log.go:172] (0xc001d1d220) (1) Data frame sent
I0216 13:12:57.841297       9 log.go:172] (0xc000832e70) (0xc000f500a0) Stream removed, broadcasting: 3
I0216 13:12:57.841385       9 log.go:172] (0xc000832e70) (0xc001d1d220) Stream removed, broadcasting: 1
I0216 13:12:57.841429       9 log.go:172] (0xc000832e70) Go away received
I0216 13:12:57.841734       9 log.go:172] (0xc000832e70) (0xc001d1d220) Stream removed, broadcasting: 1
I0216 13:12:57.841770       9 log.go:172] (0xc000832e70) (0xc000f500a0) Stream removed, broadcasting: 3
I0216 13:12:57.841797       9 log.go:172] (0xc000832e70) (0xc000f50140) Stream removed, broadcasting: 5
Feb 16 13:12:57.841: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 16 13:12:57.842: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:57.842: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:57.930487       9 log.go:172] (0xc000833340) (0xc001d1d4a0) Create stream
I0216 13:12:57.930751       9 log.go:172] (0xc000833340) (0xc001d1d4a0) Stream added, broadcasting: 1
I0216 13:12:57.934601       9 log.go:172] (0xc000833340) Reply frame received for 1
I0216 13:12:57.934672       9 log.go:172] (0xc000833340) (0xc0012965a0) Create stream
I0216 13:12:57.934688       9 log.go:172] (0xc000833340) (0xc0012965a0) Stream added, broadcasting: 3
I0216 13:12:57.935394       9 log.go:172] (0xc000833340) Reply frame received for 3
I0216 13:12:57.935410       9 log.go:172] (0xc000833340) (0xc001d1d540) Create stream
I0216 13:12:57.935417       9 log.go:172] (0xc000833340) (0xc001d1d540) Stream added, broadcasting: 5
I0216 13:12:57.936128       9 log.go:172] (0xc000833340) Reply frame received for 5
I0216 13:12:58.041652       9 log.go:172] (0xc000833340) Data frame received for 3
I0216 13:12:58.042319       9 log.go:172] (0xc0012965a0) (3) Data frame handling
I0216 13:12:58.042432       9 log.go:172] (0xc0012965a0) (3) Data frame sent
I0216 13:12:58.173506       9 log.go:172] (0xc000833340) (0xc0012965a0) Stream removed, broadcasting: 3
I0216 13:12:58.173820       9 log.go:172] (0xc000833340) Data frame received for 1
I0216 13:12:58.173922       9 log.go:172] (0xc000833340) (0xc001d1d540) Stream removed, broadcasting: 5
I0216 13:12:58.174013       9 log.go:172] (0xc001d1d4a0) (1) Data frame handling
I0216 13:12:58.174048       9 log.go:172] (0xc001d1d4a0) (1) Data frame sent
I0216 13:12:58.174076       9 log.go:172] (0xc000833340) (0xc001d1d4a0) Stream removed, broadcasting: 1
I0216 13:12:58.174099       9 log.go:172] (0xc000833340) Go away received
I0216 13:12:58.174852       9 log.go:172] (0xc000833340) (0xc001d1d4a0) Stream removed, broadcasting: 1
I0216 13:12:58.174894       9 log.go:172] (0xc000833340) (0xc0012965a0) Stream removed, broadcasting: 3
I0216 13:12:58.174912       9 log.go:172] (0xc000833340) (0xc001d1d540) Stream removed, broadcasting: 5
Feb 16 13:12:58.175: INFO: Exec stderr: ""
Feb 16 13:12:58.175: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:58.175: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:58.264827       9 log.go:172] (0xc000acb3f0) (0xc0008b30e0) Create stream
I0216 13:12:58.264927       9 log.go:172] (0xc000acb3f0) (0xc0008b30e0) Stream added, broadcasting: 1
I0216 13:12:58.272413       9 log.go:172] (0xc000acb3f0) Reply frame received for 1
I0216 13:12:58.272448       9 log.go:172] (0xc000acb3f0) (0xc0014a0000) Create stream
I0216 13:12:58.272460       9 log.go:172] (0xc000acb3f0) (0xc0014a0000) Stream added, broadcasting: 3
I0216 13:12:58.273468       9 log.go:172] (0xc000acb3f0) Reply frame received for 3
I0216 13:12:58.273499       9 log.go:172] (0xc000acb3f0) (0xc0014a00a0) Create stream
I0216 13:12:58.273511       9 log.go:172] (0xc000acb3f0) (0xc0014a00a0) Stream added, broadcasting: 5
I0216 13:12:58.274712       9 log.go:172] (0xc000acb3f0) Reply frame received for 5
I0216 13:12:58.403321       9 log.go:172] (0xc000acb3f0) Data frame received for 3
I0216 13:12:58.403418       9 log.go:172] (0xc0014a0000) (3) Data frame handling
I0216 13:12:58.403454       9 log.go:172] (0xc0014a0000) (3) Data frame sent
I0216 13:12:58.700899       9 log.go:172] (0xc000acb3f0) (0xc0014a00a0) Stream removed, broadcasting: 5
I0216 13:12:58.701270       9 log.go:172] (0xc000acb3f0) Data frame received for 1
I0216 13:12:58.701324       9 log.go:172] (0xc0008b30e0) (1) Data frame handling
I0216 13:12:58.701515       9 log.go:172] (0xc0008b30e0) (1) Data frame sent
I0216 13:12:58.701723       9 log.go:172] (0xc000acb3f0) (0xc0014a0000) Stream removed, broadcasting: 3
I0216 13:12:58.702168       9 log.go:172] (0xc000acb3f0) (0xc0008b30e0) Stream removed, broadcasting: 1
I0216 13:12:58.702251       9 log.go:172] (0xc000acb3f0) Go away received
I0216 13:12:58.702847       9 log.go:172] (0xc000acb3f0) (0xc0008b30e0) Stream removed, broadcasting: 1
I0216 13:12:58.702881       9 log.go:172] (0xc000acb3f0) (0xc0014a0000) Stream removed, broadcasting: 3
I0216 13:12:58.702910       9 log.go:172] (0xc000acb3f0) (0xc0014a00a0) Stream removed, broadcasting: 5
Feb 16 13:12:58.703: INFO: Exec stderr: ""
Feb 16 13:12:58.703: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:58.703: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:58.880932       9 log.go:172] (0xc0021b0790) (0xc0012968c0) Create stream
I0216 13:12:58.881259       9 log.go:172] (0xc0021b0790) (0xc0012968c0) Stream added, broadcasting: 1
I0216 13:12:58.904515       9 log.go:172] (0xc0021b0790) Reply frame received for 1
I0216 13:12:58.904635       9 log.go:172] (0xc0021b0790) (0xc0019fcb40) Create stream
I0216 13:12:58.904663       9 log.go:172] (0xc0021b0790) (0xc0019fcb40) Stream added, broadcasting: 3
I0216 13:12:58.907885       9 log.go:172] (0xc0021b0790) Reply frame received for 3
I0216 13:12:58.908031       9 log.go:172] (0xc0021b0790) (0xc0014a0140) Create stream
I0216 13:12:58.908062       9 log.go:172] (0xc0021b0790) (0xc0014a0140) Stream added, broadcasting: 5
I0216 13:12:58.909771       9 log.go:172] (0xc0021b0790) Reply frame received for 5
I0216 13:12:59.038469       9 log.go:172] (0xc0021b0790) Data frame received for 3
I0216 13:12:59.038602       9 log.go:172] (0xc0019fcb40) (3) Data frame handling
I0216 13:12:59.038658       9 log.go:172] (0xc0019fcb40) (3) Data frame sent
I0216 13:12:59.159448       9 log.go:172] (0xc0021b0790) Data frame received for 1
I0216 13:12:59.159617       9 log.go:172] (0xc0021b0790) (0xc0019fcb40) Stream removed, broadcasting: 3
I0216 13:12:59.159677       9 log.go:172] (0xc0012968c0) (1) Data frame handling
I0216 13:12:59.159718       9 log.go:172] (0xc0012968c0) (1) Data frame sent
I0216 13:12:59.159782       9 log.go:172] (0xc0021b0790) (0xc0014a0140) Stream removed, broadcasting: 5
I0216 13:12:59.159847       9 log.go:172] (0xc0021b0790) (0xc0012968c0) Stream removed, broadcasting: 1
I0216 13:12:59.159875       9 log.go:172] (0xc0021b0790) Go away received
I0216 13:12:59.160707       9 log.go:172] (0xc0021b0790) (0xc0012968c0) Stream removed, broadcasting: 1
I0216 13:12:59.160789       9 log.go:172] (0xc0021b0790) (0xc0019fcb40) Stream removed, broadcasting: 3
I0216 13:12:59.160807       9 log.go:172] (0xc0021b0790) (0xc0014a0140) Stream removed, broadcasting: 5
Feb 16 13:12:59.160: INFO: Exec stderr: ""
Feb 16 13:12:59.160: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-phwr7 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 16 13:12:59.161: INFO: >>> kubeConfig: /root/.kube/config
I0216 13:12:59.295805       9 log.go:172] (0xc0021b0c60) (0xc001296b40) Create stream
I0216 13:12:59.296011       9 log.go:172] (0xc0021b0c60) (0xc001296b40) Stream added, broadcasting: 1
I0216 13:12:59.300035       9 log.go:172] (0xc0021b0c60) Reply frame received for 1
I0216 13:12:59.300080       9 log.go:172] (0xc0021b0c60) (0xc001d1d680) Create stream
I0216 13:12:59.300094       9 log.go:172] (0xc0021b0c60) (0xc001d1d680) Stream added, broadcasting: 3
I0216 13:12:59.301313       9 log.go:172] (0xc0021b0c60) Reply frame received for 3
I0216 13:12:59.301344       9 log.go:172] (0xc0021b0c60) (0xc0014a01e0) Create stream
I0216 13:12:59.301358       9 log.go:172] (0xc0021b0c60) (0xc0014a01e0) Stream added, broadcasting: 5
I0216 13:12:59.302332       9 log.go:172] (0xc0021b0c60) Reply frame received for 5
I0216 13:12:59.418530       9 log.go:172] (0xc0021b0c60) Data frame received for 3
I0216 13:12:59.418791       9 log.go:172] (0xc001d1d680) (3) Data frame handling
I0216 13:12:59.418844       9 log.go:172] (0xc001d1d680) (3) Data frame sent
I0216 13:12:59.613613       9 log.go:172] (0xc0021b0c60) Data frame received for 1
I0216 13:12:59.613759       9 log.go:172] (0xc0021b0c60) (0xc001d1d680) Stream removed, broadcasting: 3
I0216 13:12:59.613848       9 log.go:172] (0xc001296b40) (1) Data frame handling
I0216 13:12:59.613871       9 log.go:172] (0xc001296b40) (1) Data frame sent
I0216 13:12:59.613931       9 log.go:172] (0xc0021b0c60) (0xc0014a01e0) Stream removed, broadcasting: 5
I0216 13:12:59.614005       9 log.go:172] (0xc0021b0c60) (0xc001296b40) Stream removed, broadcasting: 1
I0216 13:12:59.614029       9 log.go:172] (0xc0021b0c60) Go away received
I0216 13:12:59.614351       9 log.go:172] (0xc0021b0c60) (0xc001296b40) Stream removed, broadcasting: 1
I0216 13:12:59.614367       9 log.go:172] (0xc0021b0c60) (0xc001d1d680) Stream removed, broadcasting: 3
I0216 13:12:59.614378       9 log.go:172] (0xc0021b0c60) (0xc0014a01e0) Stream removed, broadcasting: 5
Feb 16 13:12:59.614: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:12:59.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-phwr7" for this suite.
Feb 16 13:13:53.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:13:53.842: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-phwr7, resource: bindings, ignored listing per whitelist
Feb 16 13:13:53.878: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-phwr7 deletion completed in 54.238835478s

• [SLOW TEST:93.164 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:13:53.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 13:13:54.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-kld2j'
Feb 16 13:13:54.379: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 13:13:54.379: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 16 13:13:58.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-kld2j'
Feb 16 13:13:58.650: INFO: stderr: ""
Feb 16 13:13:58.650: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:13:58.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kld2j" for this suite.
Feb 16 13:14:04.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:14:04.930: INFO: namespace: e2e-tests-kubectl-kld2j, resource: bindings, ignored listing per whitelist
Feb 16 13:14:05.043: INFO: namespace e2e-tests-kubectl-kld2j deletion completed in 6.371001775s

• [SLOW TEST:11.165 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:14:05.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-351ab9d7-50be-11ea-aa00-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 16 13:14:05.360: INFO: Waiting up to 5m0s for pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008" in namespace "e2e-tests-secrets-gmn2m" to be "success or failure"
Feb 16 13:14:05.382: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.168705ms
Feb 16 13:14:07.543: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182999113s
Feb 16 13:14:09.559: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19897668s
Feb 16 13:14:11.815: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455685443s
Feb 16 13:14:13.846: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486631239s
Feb 16 13:14:16.924: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.564335104s
Feb 16 13:14:18.949: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.589544316s
STEP: Saw pod success
Feb 16 13:14:18.949: INFO: Pod "pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 13:14:18.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 16 13:14:19.954: INFO: Waiting for pod pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008 to disappear
Feb 16 13:14:19.995: INFO: Pod pod-secrets-351cf5e2-50be-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:14:19.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gmn2m" for this suite.
Feb 16 13:14:28.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:14:29.576: INFO: namespace: e2e-tests-secrets-gmn2m, resource: bindings, ignored listing per whitelist
Feb 16 13:14:29.867: INFO: namespace e2e-tests-secrets-gmn2m deletion completed in 9.597981795s

• [SLOW TEST:24.824 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:14:29.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:14:47.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qrmdj" for this suite.
Feb 16 13:14:53.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:14:53.295: INFO: namespace: e2e-tests-kubelet-test-qrmdj, resource: bindings, ignored listing per whitelist
Feb 16 13:14:53.366: INFO: namespace e2e-tests-kubelet-test-qrmdj deletion completed in 6.271396265s

• [SLOW TEST:23.499 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:14:53.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 16 13:14:53.564: INFO: Waiting up to 5m0s for pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008" in namespace "e2e-tests-downward-api-g8pss" to be "success or failure"
Feb 16 13:14:53.580: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.921999ms
Feb 16 13:14:55.594: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029725953s
Feb 16 13:14:57.615: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050535936s
Feb 16 13:15:00.397: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.833183468s
Feb 16 13:15:02.411: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.846620658s
Feb 16 13:15:04.427: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.862433533s
Feb 16 13:15:06.453: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.888689284s
STEP: Saw pod success
Feb 16 13:15:06.453: INFO: Pod "downward-api-51d928c4-50be-11ea-aa00-0242ac110008" satisfied condition "success or failure"
Feb 16 13:15:06.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-51d928c4-50be-11ea-aa00-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 16 13:15:06.612: INFO: Waiting for pod downward-api-51d928c4-50be-11ea-aa00-0242ac110008 to disappear
Feb 16 13:15:06.627: INFO: Pod downward-api-51d928c4-50be-11ea-aa00-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:15:06.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g8pss" for this suite.
Feb 16 13:15:12.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:15:13.000: INFO: namespace: e2e-tests-downward-api-g8pss, resource: bindings, ignored listing per whitelist
Feb 16 13:15:13.062: INFO: namespace e2e-tests-downward-api-g8pss deletion completed in 6.402707043s

• [SLOW TEST:19.695 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:15:13.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 16 13:15:25.949: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5da440e6-50be-11ea-aa00-0242ac110008"
Feb 16 13:15:25.949: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5da440e6-50be-11ea-aa00-0242ac110008" in namespace "e2e-tests-pods-fxclr" to be "terminated due to deadline exceeded"
Feb 16 13:15:25.984: INFO: Pod "pod-update-activedeadlineseconds-5da440e6-50be-11ea-aa00-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 35.044252ms
Feb 16 13:15:27.995: INFO: Pod "pod-update-activedeadlineseconds-5da440e6-50be-11ea-aa00-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.045280214s
Feb 16 13:15:27.995: INFO: Pod "pod-update-activedeadlineseconds-5da440e6-50be-11ea-aa00-0242ac110008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:15:27.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fxclr" for this suite.
Feb 16 13:15:34.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:15:34.694: INFO: namespace: e2e-tests-pods-fxclr, resource: bindings, ignored listing per whitelist
Feb 16 13:15:34.694: INFO: namespace e2e-tests-pods-fxclr deletion completed in 6.691764514s

• [SLOW TEST:21.632 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:15:34.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-b28qh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b28qh to expose endpoints map[]
Feb 16 13:15:35.172: INFO: Get endpoints failed (68.809638ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 16 13:15:36.186: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b28qh exposes endpoints map[] (1.082996577s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-b28qh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b28qh to expose endpoints map[pod1:[80]]
Feb 16 13:15:40.904: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.695019917s elapsed, will retry)
Feb 16 13:15:46.985: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (10.775817525s elapsed, will retry)
Feb 16 13:15:49.058: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b28qh exposes endpoints map[pod1:[80]] (12.849327462s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-b28qh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b28qh to expose endpoints map[pod1:[80] pod2:[80]]
Feb 16 13:15:53.496: INFO: Unexpected endpoints: found map[6b45afb8-50be-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.343414328s elapsed, will retry)
Feb 16 13:16:01.487: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b28qh exposes endpoints map[pod1:[80] pod2:[80]] (12.334728002s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-b28qh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b28qh to expose endpoints map[pod2:[80]]
Feb 16 13:16:02.856: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b28qh exposes endpoints map[pod2:[80]] (1.217880502s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-b28qh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b28qh to expose endpoints map[]
Feb 16 13:16:06.126: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b28qh exposes endpoints map[] (3.26496494s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:16:06.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-b28qh" for this suite.
Feb 16 13:16:30.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:16:31.358: INFO: namespace: e2e-tests-services-b28qh, resource: bindings, ignored listing per whitelist
Feb 16 13:16:31.380: INFO: namespace e2e-tests-services-b28qh deletion completed in 25.004507494s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:56.686 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
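The endpoints spec above creates service `endpoint-test2`, then pods `pod1` and `pod2`, and verifies the Endpoints object tracks them as `map[pod1:[80] pod2:[80]]`. The topology it exercises can be approximated with a manifest like the following — a hypothetical sketch reconstructed from the log, not the test's actual fixture; the selector label and container image are assumptions:

```yaml
# Sketch of the resources the spec exercises (names from the log; the
# selector label and image are illustrative assumptions, not from the log).
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2        # assumed selector; the real test's label may differ
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test2        # matching the assumed selector above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image; the test only needs port 80 registered
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: endpoint-test2
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
```

Once applied, `kubectl get endpoints endpoint-test2` should list both pod IPs on port 80, which is the condition the `waiting up to 3m0s ... to expose endpoints` steps poll for; deleting a pod removes its address, matching the `map[pod2:[80]]` and final `map[]` validations in the log.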
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 16 13:16:31.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 16 13:16:31.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-6pqkp'
Feb 16 13:16:31.799: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 16 13:16:31.799: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 16 13:16:34.209: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-s7452]
Feb 16 13:16:34.209: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-s7452" in namespace "e2e-tests-kubectl-6pqkp" to be "running and ready"
Feb 16 13:16:34.220: INFO: Pod "e2e-test-nginx-rc-s7452": Phase="Pending", Reason="", readiness=false. Elapsed: 10.854313ms
Feb 16 13:16:36.240: INFO: Pod "e2e-test-nginx-rc-s7452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031233329s
Feb 16 13:16:38.267: INFO: Pod "e2e-test-nginx-rc-s7452": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057588752s
Feb 16 13:16:40.781: INFO: Pod "e2e-test-nginx-rc-s7452": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572174117s
Feb 16 13:16:42.801: INFO: Pod "e2e-test-nginx-rc-s7452": Phase="Pending", Reason="", readiness=false. Elapsed: 8.591372309s
Feb 16 13:16:44.828: INFO: Pod "e2e-test-nginx-rc-s7452": Phase="Running", Reason="", readiness=true. Elapsed: 10.618558569s
Feb 16 13:16:44.828: INFO: Pod "e2e-test-nginx-rc-s7452" satisfied condition "running and ready"
Feb 16 13:16:44.828: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-s7452]
Feb 16 13:16:44.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6pqkp'
Feb 16 13:16:45.055: INFO: stderr: ""
Feb 16 13:16:45.055: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 16 13:16:45.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6pqkp'
Feb 16 13:16:45.272: INFO: stderr: ""
Feb 16 13:16:45.272: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 16 13:16:45.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6pqkp" for this suite.
Feb 16 13:17:09.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 16 13:17:09.576: INFO: namespace: e2e-tests-kubectl-6pqkp, resource: bindings, ignored listing per whitelist
Feb 16 13:17:09.587: INFO: namespace e2e-tests-kubectl-6pqkp deletion completed in 24.296014353s

• [SLOW TEST:38.208 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
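The stderr captured in the spec above warns that `kubectl run --generator=run/v1` is deprecated. That generator produced a bare ReplicationController; roughly the equivalent object, written out as a manifest, looks like the sketch below (an illustration of what the generator emits, assuming the default `run=<name>` label convention, not output copied from this run):

```yaml
# Approximate ReplicationController that `kubectl run e2e-test-nginx-rc
# --image=docker.io/library/nginx:1.14-alpine --generator=run/v1` generates.
# The `run: e2e-test-nginx-rc` label is the generator's default convention.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

As the deprecation message suggests, newer clients create this kind of workload with `kubectl create` (or `kubectl run --generator=run-pod/v1` for a single pod) rather than the `run/v1` generator.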
SSS
Feb 16 13:17:09.588: INFO: Running AfterSuite actions on all nodes
Feb 16 13:17:09.588: INFO: Running AfterSuite actions on node 1
Feb 16 13:17:09.588: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8994.222 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS