I0107 10:47:15.788682 8 e2e.go:224] Starting e2e run "111a393b-313b-11ea-8b51-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578394034 - Will randomize all specs
Will run 201 of 2164 specs

Jan 7 10:47:16.416: INFO: >>> kubeConfig: /root/.kube/config
Jan 7 10:47:16.426: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 7 10:47:16.465: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 7 10:47:16.548: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 7 10:47:16.548: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 7 10:47:16.548: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 7 10:47:16.577: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 7 10:47:16.577: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 7 10:47:16.578: INFO: e2e test version: v1.13.12
Jan 7 10:47:16.580: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:47:16.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 7 10:47:16.775: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 7 10:47:16.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:18.887: INFO: stderr: ""
Jan 7 10:47:18.887: INFO: stdout: "pod/pause created\n"
Jan 7 10:47:18.888: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 7 10:47:18.888: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8llbw" to be "running and ready"
Jan 7 10:47:19.001: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 113.451193ms
Jan 7 10:47:21.015: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127062607s
Jan 7 10:47:23.027: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139121811s
Jan 7 10:47:25.050: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162063663s
Jan 7 10:47:27.097: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20930368s
Jan 7 10:47:29.114: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.225701462s
Jan 7 10:47:29.114: INFO: Pod "pause" satisfied condition "running and ready"
Jan 7 10:47:29.114: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 7 10:47:29.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:29.369: INFO: stderr: ""
Jan 7 10:47:29.369: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 7 10:47:29.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:29.542: INFO: stderr: ""
Jan 7 10:47:29.543: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 7 10:47:29.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:29.660: INFO: stderr: ""
Jan 7 10:47:29.661: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 7 10:47:29.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:29.785: INFO: stderr: ""
Jan 7 10:47:29.786: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 7 10:47:29.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:30.041: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 7 10:47:30.041: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 7 10:47:30.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8llbw'
Jan 7 10:47:30.199: INFO: stderr: "No resources found.\n"
Jan 7 10:47:30.199: INFO: stdout: ""
Jan 7 10:47:30.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8llbw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 7 10:47:30.325: INFO: stderr: ""
Jan 7 10:47:30.325: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:47:30.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8llbw" for this suite.
Jan 7 10:47:36.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:47:36.548: INFO: namespace: e2e-tests-kubectl-8llbw, resource: bindings, ignored listing per whitelist
Jan 7 10:47:36.586: INFO: namespace e2e-tests-kubectl-8llbw deletion completed in 6.252649316s

• [SLOW TEST:20.006 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:47:36.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 7 10:47:36.799: INFO: Waiting up to 5m0s for pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005" in namespace "e2e-tests-var-expansion-6ntg4" to be "success or failure"
Jan 7 10:47:36.819: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.920495ms
Jan 7 10:47:38.842: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042730402s
Jan 7 10:47:40.872: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07184666s
Jan 7 10:47:42.888: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088398672s
Jan 7 10:47:44.917: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117025371s
Jan 7 10:47:46.954: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15389224s
STEP: Saw pod success
Jan 7 10:47:46.954: INFO: Pod "var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan 7 10:47:46.967: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 7 10:47:47.075: INFO: Waiting for pod var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005 to disappear
Jan 7 10:47:47.100: INFO: Pod var-expansion-1e353f1a-313b-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:47:47.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-6ntg4" for this suite.
Jan 7 10:47:54.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:47:54.140: INFO: namespace: e2e-tests-var-expansion-6ntg4, resource: bindings, ignored listing per whitelist
Jan 7 10:47:54.191: INFO: namespace e2e-tests-var-expansion-6ntg4 deletion completed in 7.085133292s

• [SLOW TEST:17.604 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:47:54.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 7 10:47:54.378: INFO: Waiting up to 5m0s for pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-jmbm2" to be "success or failure"
Jan 7 10:47:54.432: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.770261ms
Jan 7 10:47:56.452: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073234048s
Jan 7 10:47:58.478: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099210875s
Jan 7 10:48:00.953: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57416787s
Jan 7 10:48:02.968: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589438108s
Jan 7 10:48:04.998: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619317833s
STEP: Saw pod success
Jan 7 10:48:04.998: INFO: Pod "downward-api-28a97b99-313b-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan 7 10:48:05.020: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-28a97b99-313b-11ea-8b51-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 7 10:48:05.226: INFO: Waiting for pod downward-api-28a97b99-313b-11ea-8b51-0242ac110005 to disappear
Jan 7 10:48:05.280: INFO: Pod downward-api-28a97b99-313b-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:48:05.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jmbm2" for this suite.
Jan 7 10:48:11.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:48:11.426: INFO: namespace: e2e-tests-downward-api-jmbm2, resource: bindings, ignored listing per whitelist
Jan 7 10:48:11.467: INFO: namespace e2e-tests-downward-api-jmbm2 deletion completed in 6.167281703s

• [SLOW TEST:17.276 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:48:11.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-8cldv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8cldv to expose endpoints map[]
Jan 7 10:48:11.774: INFO: Get endpoints failed (6.399103ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 7 10:48:12.798: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8cldv exposes endpoints map[] (1.030552545s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-8cldv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8cldv to expose endpoints map[pod1:[80]]
Jan 7 10:48:17.076: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.257105737s elapsed, will retry)
Jan 7 10:48:22.825: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8cldv exposes endpoints map[pod1:[80]] (10.005249062s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-8cldv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8cldv to expose endpoints map[pod1:[80] pod2:[80]]
Jan 7 10:48:27.179: INFO: Unexpected endpoints: found map[33add933-313b-11ea-a994-fa163e34d433:[80]], expected map[pod2:[80] pod1:[80]] (4.348987777s elapsed, will retry)
Jan 7 10:48:32.032: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8cldv exposes endpoints map[pod1:[80] pod2:[80]] (9.2025086s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-8cldv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8cldv to expose endpoints map[pod2:[80]]
Jan 7 10:48:33.767: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8cldv exposes endpoints map[pod2:[80]] (1.721369188s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-8cldv
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-8cldv to expose endpoints map[]
Jan 7 10:48:34.856: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-8cldv exposes endpoints map[] (1.058444629s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:48:35.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-8cldv" for this suite.
Jan 7 10:48:59.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:48:59.397: INFO: namespace: e2e-tests-services-8cldv, resource: bindings, ignored listing per whitelist
Jan 7 10:48:59.481: INFO: namespace e2e-tests-services-8cldv deletion completed in 24.346597874s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.013 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:48:59.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 7 10:48:59.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zzntm'
Jan 7 10:48:59.851: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 7 10:48:59.851: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 7 10:48:59.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-zzntm'
Jan 7 10:49:00.069: INFO: stderr: ""
Jan 7 10:49:00.070: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:49:00.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zzntm" for this suite.
Jan 7 10:49:06.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:49:06.577: INFO: namespace: e2e-tests-kubectl-zzntm, resource: bindings, ignored listing per whitelist
Jan 7 10:49:06.652: INFO: namespace e2e-tests-kubectl-zzntm deletion completed in 6.468550953s

• [SLOW TEST:7.171 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:49:06.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 7 10:49:06.998: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"53e75b22-313b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00173d722), BlockOwnerDeletion:(*bool)(0xc00173d723)}}
Jan 7 10:49:07.047: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"53e2c679-313b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00173db32), BlockOwnerDeletion:(*bool)(0xc00173db33)}}
Jan 7 10:49:07.217: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"53e52d7a-313b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001b8111a), BlockOwnerDeletion:(*bool)(0xc001b8111b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:49:12.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dm4pz" for this suite.
Jan 7 10:49:18.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:49:18.721: INFO: namespace: e2e-tests-gc-dm4pz, resource: bindings, ignored listing per whitelist
Jan 7 10:49:18.851: INFO: namespace e2e-tests-gc-dm4pz deletion completed in 6.351669025s

• [SLOW TEST:12.198 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:49:18.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 7 10:49:19.381: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 7 10:49:19.543: INFO: Number of nodes with available pods: 0
Jan 7 10:49:19.543: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 7 10:49:19.777: INFO: Number of nodes with available pods: 0
Jan 7 10:49:19.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:20.796: INFO: Number of nodes with available pods: 0
Jan 7 10:49:20.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:21.821: INFO: Number of nodes with available pods: 0
Jan 7 10:49:21.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:22.818: INFO: Number of nodes with available pods: 0
Jan 7 10:49:22.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:23.794: INFO: Number of nodes with available pods: 0
Jan 7 10:49:23.794: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:25.075: INFO: Number of nodes with available pods: 0
Jan 7 10:49:25.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:26.187: INFO: Number of nodes with available pods: 0
Jan 7 10:49:26.188: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:27.155: INFO: Number of nodes with available pods: 0
Jan 7 10:49:27.155: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:27.795: INFO: Number of nodes with available pods: 0
Jan 7 10:49:27.795: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:28.791: INFO: Number of nodes with available pods: 0
Jan 7 10:49:28.791: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:29.789: INFO: Number of nodes with available pods: 1
Jan 7 10:49:29.789: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 7 10:49:29.898: INFO: Number of nodes with available pods: 1
Jan 7 10:49:29.899: INFO: Number of running nodes: 0, number of available pods: 1
Jan 7 10:49:30.919: INFO: Number of nodes with available pods: 0
Jan 7 10:49:30.920: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 7 10:49:31.114: INFO: Number of nodes with available pods: 0
Jan 7 10:49:31.114: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:32.147: INFO: Number of nodes with available pods: 0
Jan 7 10:49:32.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:33.140: INFO: Number of nodes with available pods: 0
Jan 7 10:49:33.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:34.131: INFO: Number of nodes with available pods: 0
Jan 7 10:49:34.131: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:35.134: INFO: Number of nodes with available pods: 0
Jan 7 10:49:35.135: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:36.135: INFO: Number of nodes with available pods: 0
Jan 7 10:49:36.135: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:37.129: INFO: Number of nodes with available pods: 0
Jan 7 10:49:37.129: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:38.140: INFO: Number of nodes with available pods: 0
Jan 7 10:49:38.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:39.132: INFO: Number of nodes with available pods: 0
Jan 7 10:49:39.132: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:40.144: INFO: Number of nodes with available pods: 0
Jan 7 10:49:40.144: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:41.126: INFO: Number of nodes with available pods: 0
Jan 7 10:49:41.126: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:42.130: INFO: Number of nodes with available pods: 0
Jan 7 10:49:42.131: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:43.145: INFO: Number of nodes with available pods: 0
Jan 7 10:49:43.145: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:44.225: INFO: Number of nodes with available pods: 0
Jan 7 10:49:44.225: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:45.240: INFO: Number of nodes with available pods: 0
Jan 7 10:49:45.240: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:46.137: INFO: Number of nodes with available pods: 0
Jan 7 10:49:46.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:47.152: INFO: Number of nodes with available pods: 0
Jan 7 10:49:47.152: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:49.608: INFO: Number of nodes with available pods: 0
Jan 7 10:49:49.608: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:50.232: INFO: Number of nodes with available pods: 0
Jan 7 10:49:50.233: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:51.139: INFO: Number of nodes with available pods: 0
Jan 7 10:49:51.139: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:52.137: INFO: Number of nodes with available pods: 0
Jan 7 10:49:52.137: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 7 10:49:53.129: INFO: Number of nodes with available pods: 1
Jan 7 10:49:53.129: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-np4qg, will wait for the garbage collector to delete the pods
Jan 7 10:49:53.228: INFO: Deleting DaemonSet.extensions daemon-set took: 28.492654ms
Jan 7 10:49:53.329: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.868741ms
Jan 7 10:50:01.322: INFO: Number of nodes with available pods: 0
Jan 7 10:50:01.322: INFO: Number of running nodes: 0, number of available pods: 0
Jan 7 10:50:01.332: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-np4qg/daemonsets","resourceVersion":"17463961"},"items":null}
Jan 7 10:50:01.339: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-np4qg/pods","resourceVersion":"17463961"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:50:01.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-np4qg" for this suite.
Jan 7 10:50:09.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 7 10:50:09.648: INFO: namespace: e2e-tests-daemonsets-np4qg, resource: bindings, ignored listing per whitelist Jan 7 10:50:09.729: INFO: namespace e2e-tests-daemonsets-np4qg deletion completed in 8.324341885s • [SLOW TEST:50.878 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 7 10:50:09.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 7 10:50:09.894: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 7 10:50:09.924: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 7 10:50:15.892: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 7 10:50:19.933: INFO: Creating deployment "test-rolling-update-deployment" Jan 7 10:50:19.940: INFO: Ensuring 
deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 7 10:50:19.957: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 7 10:50:21.993: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 7 10:50:21.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991019, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 10:50:24.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991019, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 10:50:26.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991019, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 10:50:28.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991020, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991019, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 7 10:50:30.019: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 7 10:50:30.041: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-42dsz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-42dsz/deployments/test-rolling-update-deployment,UID:7f73f844-313b-11ea-a994-fa163e34d433,ResourceVersion:17464058,Generation:1,CreationTimestamp:2020-01-07 10:50:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-07 10:50:20 +0000 UTC 2020-01-07 10:50:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-07 10:50:29 +0000 UTC 2020-01-07 10:50:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 7 10:50:30.046: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-42dsz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-42dsz/replicasets/test-rolling-update-deployment-75db98fb4c,UID:7f78cc73-313b-11ea-a994-fa163e34d433,ResourceVersion:17464049,Generation:1,CreationTimestamp:2020-01-07 10:50:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7f73f844-313b-11ea-a994-fa163e34d433 0xc001658517 0xc001658518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 7 10:50:30.046: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 7 10:50:30.046: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-42dsz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-42dsz/replicasets/test-rolling-update-controller,UID:79788b58-313b-11ea-a994-fa163e34d433,ResourceVersion:17464057,Generation:2,CreationTimestamp:2020-01-07 10:50:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7f73f844-313b-11ea-a994-fa163e34d433 0xc00165843f 0xc001658450}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 7 10:50:30.052: INFO: Pod "test-rolling-update-deployment-75db98fb4c-cwjrd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-cwjrd,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-42dsz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-42dsz/pods/test-rolling-update-deployment-75db98fb4c-cwjrd,UID:7f7e00d0-313b-11ea-a994-fa163e34d433,ResourceVersion:17464048,Generation:0,CreationTimestamp:2020-01-07 10:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 7f78cc73-313b-11ea-a994-fa163e34d433 0xc001aca217 0xc001aca218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9qdz4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qdz4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9qdz4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001aca280} {node.kubernetes.io/unreachable Exists NoExecute 0xc001aca2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-07 10:50:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-07 10:50:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d2d39a0bd51d7c880a7899360424218ba2a3ba8fe0e7419c0bbf2cf33925a674}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 7 10:50:30.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-42dsz" for this suite.
Jan 7 10:50:38.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:50:38.888: INFO: namespace: e2e-tests-deployment-42dsz, resource: bindings, ignored listing per whitelist
Jan 7 10:50:38.904: INFO: namespace e2e-tests-deployment-42dsz deletion completed in 8.842243986s
• [SLOW TEST:29.174 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:50:38.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 7 10:50:39.225: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jdk8g,SelfLink:/api/v1/namespaces/e2e-tests-watch-jdk8g/configmaps/e2e-watch-test-watch-closed,UID:8af06d59-313b-11ea-a994-fa163e34d433,ResourceVersion:17464100,Generation:0,CreationTimestamp:2020-01-07 10:50:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 7 10:50:39.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jdk8g,SelfLink:/api/v1/namespaces/e2e-tests-watch-jdk8g/configmaps/e2e-watch-test-watch-closed,UID:8af06d59-313b-11ea-a994-fa163e34d433,ResourceVersion:17464101,Generation:0,CreationTimestamp:2020-01-07 10:50:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 7 10:50:39.387: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jdk8g,SelfLink:/api/v1/namespaces/e2e-tests-watch-jdk8g/configmaps/e2e-watch-test-watch-closed,UID:8af06d59-313b-11ea-a994-fa163e34d433,ResourceVersion:17464103,Generation:0,CreationTimestamp:2020-01-07 10:50:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 7 10:50:39.387: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jdk8g,SelfLink:/api/v1/namespaces/e2e-tests-watch-jdk8g/configmaps/e2e-watch-test-watch-closed,UID:8af06d59-313b-11ea-a994-fa163e34d433,ResourceVersion:17464104,Generation:0,CreationTimestamp:2020-01-07 10:50:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:50:39.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-jdk8g" for this suite.
Jan 7 10:50:45.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:50:45.589: INFO: namespace: e2e-tests-watch-jdk8g, resource: bindings, ignored listing per whitelist
Jan 7 10:50:45.730: INFO: namespace e2e-tests-watch-jdk8g deletion completed in 6.319915997s
• [SLOW TEST:6.826 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:50:45.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qpvs5
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-qpvs5
STEP:
Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-qpvs5 Jan 7 10:50:45.963: INFO: Found 0 stateful pods, waiting for 1 Jan 7 10:50:55.983: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 7 10:50:55.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 10:50:56.994: INFO: stderr: "I0107 10:50:56.265282 259 log.go:172] (0xc00070e370) (0xc00072e640) Create stream\nI0107 10:50:56.265781 259 log.go:172] (0xc00070e370) (0xc00072e640) Stream added, broadcasting: 1\nI0107 10:50:56.302438 259 log.go:172] (0xc00070e370) Reply frame received for 1\nI0107 10:50:56.302690 259 log.go:172] (0xc00070e370) (0xc00065ed20) Create stream\nI0107 10:50:56.302735 259 log.go:172] (0xc00070e370) (0xc00065ed20) Stream added, broadcasting: 3\nI0107 10:50:56.305260 259 log.go:172] (0xc00070e370) Reply frame received for 3\nI0107 10:50:56.305310 259 log.go:172] (0xc00070e370) (0xc00072e6e0) Create stream\nI0107 10:50:56.305324 259 log.go:172] (0xc00070e370) (0xc00072e6e0) Stream added, broadcasting: 5\nI0107 10:50:56.309050 259 log.go:172] (0xc00070e370) Reply frame received for 5\nI0107 10:50:56.784951 259 log.go:172] (0xc00070e370) Data frame received for 3\nI0107 10:50:56.785197 259 log.go:172] (0xc00065ed20) (3) Data frame handling\nI0107 10:50:56.785243 259 log.go:172] (0xc00065ed20) (3) Data frame sent\nI0107 10:50:56.973016 259 log.go:172] (0xc00070e370) (0xc00072e6e0) Stream removed, broadcasting: 5\nI0107 10:50:56.973445 259 log.go:172] (0xc00070e370) Data frame received for 1\nI0107 10:50:56.973632 259 log.go:172] (0xc00070e370) (0xc00065ed20) Stream removed, broadcasting: 3\nI0107 10:50:56.973918 259 log.go:172] (0xc00072e640) (1) Data frame handling\nI0107 
10:50:56.973956 259 log.go:172] (0xc00072e640) (1) Data frame sent\nI0107 10:50:56.973969 259 log.go:172] (0xc00070e370) (0xc00072e640) Stream removed, broadcasting: 1\nI0107 10:50:56.974004 259 log.go:172] (0xc00070e370) Go away received\nI0107 10:50:56.976171 259 log.go:172] (0xc00070e370) (0xc00072e640) Stream removed, broadcasting: 1\nI0107 10:50:56.976224 259 log.go:172] (0xc00070e370) (0xc00065ed20) Stream removed, broadcasting: 3\nI0107 10:50:56.976249 259 log.go:172] (0xc00070e370) (0xc00072e6e0) Stream removed, broadcasting: 5\n"
Jan 7 10:50:56.994: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 7 10:50:56.994: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 7 10:50:57.133: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 7 10:50:57.134: INFO: Waiting for statefulset status.replicas updated to 0
Jan 7 10:50:57.166: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 7 10:50:57.166: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }]
Jan 7 10:50:57.166: INFO:
Jan 7 10:50:57.166: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 7 10:50:58.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988655858s
Jan 7 10:50:59.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.545163217s
Jan 7 10:51:00.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.521945263s
Jan 7 10:51:01.683: INFO: Verifying statefulset ss doesn't scale
past 3 for another 5.489076498s Jan 7 10:51:03.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.471614289s Jan 7 10:51:05.023: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.736712431s Jan 7 10:51:06.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.131483274s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-qpvs5 Jan 7 10:51:07.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 10:51:08.137: INFO: stderr: "I0107 10:51:07.658731 280 log.go:172] (0xc0001646e0) (0xc0007c2640) Create stream\nI0107 10:51:07.659011 280 log.go:172] (0xc0001646e0) (0xc0007c2640) Stream added, broadcasting: 1\nI0107 10:51:07.669883 280 log.go:172] (0xc0001646e0) Reply frame received for 1\nI0107 10:51:07.669925 280 log.go:172] (0xc0001646e0) (0xc000696dc0) Create stream\nI0107 10:51:07.669935 280 log.go:172] (0xc0001646e0) (0xc000696dc0) Stream added, broadcasting: 3\nI0107 10:51:07.671174 280 log.go:172] (0xc0001646e0) Reply frame received for 3\nI0107 10:51:07.671210 280 log.go:172] (0xc0001646e0) (0xc0007c26e0) Create stream\nI0107 10:51:07.671227 280 log.go:172] (0xc0001646e0) (0xc0007c26e0) Stream added, broadcasting: 5\nI0107 10:51:07.672248 280 log.go:172] (0xc0001646e0) Reply frame received for 5\nI0107 10:51:07.913215 280 log.go:172] (0xc0001646e0) Data frame received for 3\nI0107 10:51:07.913351 280 log.go:172] (0xc000696dc0) (3) Data frame handling\nI0107 10:51:07.913382 280 log.go:172] (0xc000696dc0) (3) Data frame sent\nI0107 10:51:08.126535 280 log.go:172] (0xc0001646e0) (0xc000696dc0) Stream removed, broadcasting: 3\nI0107 10:51:08.126780 280 log.go:172] (0xc0001646e0) Data frame received for 1\nI0107 10:51:08.126849 280 log.go:172] (0xc0001646e0) (0xc0007c26e0) Stream removed, broadcasting: 
5\nI0107 10:51:08.126880 280 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0107 10:51:08.126918 280 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0107 10:51:08.126924 280 log.go:172] (0xc0001646e0) (0xc0007c2640) Stream removed, broadcasting: 1\nI0107 10:51:08.126938 280 log.go:172] (0xc0001646e0) Go away received\nI0107 10:51:08.127846 280 log.go:172] (0xc0001646e0) (0xc0007c2640) Stream removed, broadcasting: 1\nI0107 10:51:08.127867 280 log.go:172] (0xc0001646e0) (0xc000696dc0) Stream removed, broadcasting: 3\nI0107 10:51:08.127880 280 log.go:172] (0xc0001646e0) (0xc0007c26e0) Stream removed, broadcasting: 5\n" Jan 7 10:51:08.137: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 10:51:08.137: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 10:51:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 10:51:08.411: INFO: rc: 1 Jan 7 10:51:08.411: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00113f410 exit status 1 true [0xc0019d0a60 0xc0019d0a98 0xc0019d0ae0] [0xc0019d0a60 0xc0019d0a98 0xc0019d0ae0] [0xc0019d0a88 0xc0019d0ac0] [0x935700 0x935700] 0xc001403b60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 7 10:51:18.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 10:51:19.525: INFO: stderr: "I0107 
10:51:18.930170 324 log.go:172] (0xc0008842c0) (0xc000768640) Create stream\nI0107 10:51:18.930456 324 log.go:172] (0xc0008842c0) (0xc000768640) Stream added, broadcasting: 1\nI0107 10:51:18.952455 324 log.go:172] (0xc0008842c0) Reply frame received for 1\nI0107 10:51:18.952605 324 log.go:172] (0xc0008842c0) (0xc0006fadc0) Create stream\nI0107 10:51:18.952624 324 log.go:172] (0xc0008842c0) (0xc0006fadc0) Stream added, broadcasting: 3\nI0107 10:51:18.969129 324 log.go:172] (0xc0008842c0) Reply frame received for 3\nI0107 10:51:18.969267 324 log.go:172] (0xc0008842c0) (0xc0007686e0) Create stream\nI0107 10:51:18.969292 324 log.go:172] (0xc0008842c0) (0xc0007686e0) Stream added, broadcasting: 5\nI0107 10:51:18.974424 324 log.go:172] (0xc0008842c0) Reply frame received for 5\nI0107 10:51:19.250192 324 log.go:172] (0xc0008842c0) Data frame received for 5\nI0107 10:51:19.250410 324 log.go:172] (0xc0007686e0) (5) Data frame handling\nI0107 10:51:19.250445 324 log.go:172] (0xc0007686e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0107 10:51:19.250489 324 log.go:172] (0xc0008842c0) Data frame received for 3\nI0107 10:51:19.250502 324 log.go:172] (0xc0006fadc0) (3) Data frame handling\nI0107 10:51:19.250514 324 log.go:172] (0xc0006fadc0) (3) Data frame sent\nI0107 10:51:19.500968 324 log.go:172] (0xc0008842c0) (0xc0006fadc0) Stream removed, broadcasting: 3\nI0107 10:51:19.501285 324 log.go:172] (0xc0008842c0) Data frame received for 1\nI0107 10:51:19.501409 324 log.go:172] (0xc0008842c0) (0xc0007686e0) Stream removed, broadcasting: 5\nI0107 10:51:19.501512 324 log.go:172] (0xc000768640) (1) Data frame handling\nI0107 10:51:19.501542 324 log.go:172] (0xc000768640) (1) Data frame sent\nI0107 10:51:19.501549 324 log.go:172] (0xc0008842c0) (0xc000768640) Stream removed, broadcasting: 1\nI0107 10:51:19.501571 324 log.go:172] (0xc0008842c0) Go away received\nI0107 10:51:19.502918 324 log.go:172] (0xc0008842c0) (0xc000768640) Stream removed, 
broadcasting: 1\nI0107 10:51:19.502929 324 log.go:172] (0xc0008842c0) (0xc0006fadc0) Stream removed, broadcasting: 3\nI0107 10:51:19.502943 324 log.go:172] (0xc0008842c0) (0xc0007686e0) Stream removed, broadcasting: 5\n" Jan 7 10:51:19.525: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 10:51:19.525: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 10:51:19.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 10:51:19.936: INFO: stderr: "I0107 10:51:19.693567 346 log.go:172] (0xc0006ec2c0) (0xc00071c640) Create stream\nI0107 10:51:19.694035 346 log.go:172] (0xc0006ec2c0) (0xc00071c640) Stream added, broadcasting: 1\nI0107 10:51:19.698626 346 log.go:172] (0xc0006ec2c0) Reply frame received for 1\nI0107 10:51:19.698725 346 log.go:172] (0xc0006ec2c0) (0xc000698dc0) Create stream\nI0107 10:51:19.698760 346 log.go:172] (0xc0006ec2c0) (0xc000698dc0) Stream added, broadcasting: 3\nI0107 10:51:19.701001 346 log.go:172] (0xc0006ec2c0) Reply frame received for 3\nI0107 10:51:19.701044 346 log.go:172] (0xc0006ec2c0) (0xc000678000) Create stream\nI0107 10:51:19.701055 346 log.go:172] (0xc0006ec2c0) (0xc000678000) Stream added, broadcasting: 5\nI0107 10:51:19.702002 346 log.go:172] (0xc0006ec2c0) Reply frame received for 5\nI0107 10:51:19.816914 346 log.go:172] (0xc0006ec2c0) Data frame received for 5\nI0107 10:51:19.817328 346 log.go:172] (0xc000678000) (5) Data frame handling\nI0107 10:51:19.817716 346 log.go:172] (0xc000678000) (5) Data frame sent\nI0107 10:51:19.817762 346 log.go:172] (0xc0006ec2c0) Data frame received for 3\nI0107 10:51:19.817772 346 log.go:172] (0xc000698dc0) (3) Data frame handling\nI0107 10:51:19.817808 346 log.go:172] (0xc000698dc0) (3) Data frame sent\nmv: can't rename 
'/tmp/index.html': No such file or directory\nI0107 10:51:19.925181 346 log.go:172] (0xc0006ec2c0) Data frame received for 1\nI0107 10:51:19.925318 346 log.go:172] (0xc0006ec2c0) (0xc000698dc0) Stream removed, broadcasting: 3\nI0107 10:51:19.925366 346 log.go:172] (0xc00071c640) (1) Data frame handling\nI0107 10:51:19.925374 346 log.go:172] (0xc00071c640) (1) Data frame sent\nI0107 10:51:19.925379 346 log.go:172] (0xc0006ec2c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0107 10:51:19.925938 346 log.go:172] (0xc0006ec2c0) (0xc000678000) Stream removed, broadcasting: 5\nI0107 10:51:19.925970 346 log.go:172] (0xc0006ec2c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0107 10:51:19.925984 346 log.go:172] (0xc0006ec2c0) (0xc000698dc0) Stream removed, broadcasting: 3\nI0107 10:51:19.925995 346 log.go:172] (0xc0006ec2c0) (0xc000678000) Stream removed, broadcasting: 5\nI0107 10:51:19.926327 346 log.go:172] (0xc0006ec2c0) Go away received\n" Jan 7 10:51:19.937: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 7 10:51:19.937: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 7 10:51:19.958: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 7 10:51:19.958: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 7 10:51:19.958: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 7 10:51:19.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 10:51:20.380: INFO: stderr: "I0107 10:51:20.145410 367 log.go:172] (0xc0006e2370) (0xc000702640) Create stream\nI0107 10:51:20.145510 367 log.go:172] (0xc0006e2370) (0xc000702640) Stream added, 
broadcasting: 1\nI0107 10:51:20.149643 367 log.go:172] (0xc0006e2370) Reply frame received for 1\nI0107 10:51:20.149739 367 log.go:172] (0xc0006e2370) (0xc0007026e0) Create stream\nI0107 10:51:20.149749 367 log.go:172] (0xc0006e2370) (0xc0007026e0) Stream added, broadcasting: 3\nI0107 10:51:20.150994 367 log.go:172] (0xc0006e2370) Reply frame received for 3\nI0107 10:51:20.151025 367 log.go:172] (0xc0006e2370) (0xc0007a2dc0) Create stream\nI0107 10:51:20.151036 367 log.go:172] (0xc0006e2370) (0xc0007a2dc0) Stream added, broadcasting: 5\nI0107 10:51:20.152479 367 log.go:172] (0xc0006e2370) Reply frame received for 5\nI0107 10:51:20.244719 367 log.go:172] (0xc0006e2370) Data frame received for 3\nI0107 10:51:20.244813 367 log.go:172] (0xc0007026e0) (3) Data frame handling\nI0107 10:51:20.244854 367 log.go:172] (0xc0007026e0) (3) Data frame sent\nI0107 10:51:20.370746 367 log.go:172] (0xc0006e2370) Data frame received for 1\nI0107 10:51:20.371195 367 log.go:172] (0xc0006e2370) (0xc0007026e0) Stream removed, broadcasting: 3\nI0107 10:51:20.371324 367 log.go:172] (0xc000702640) (1) Data frame handling\nI0107 10:51:20.371391 367 log.go:172] (0xc000702640) (1) Data frame sent\nI0107 10:51:20.371599 367 log.go:172] (0xc0006e2370) (0xc000702640) Stream removed, broadcasting: 1\nI0107 10:51:20.371791 367 log.go:172] (0xc0006e2370) (0xc0007a2dc0) Stream removed, broadcasting: 5\nI0107 10:51:20.371821 367 log.go:172] (0xc0006e2370) Go away received\nI0107 10:51:20.372906 367 log.go:172] (0xc0006e2370) (0xc000702640) Stream removed, broadcasting: 1\nI0107 10:51:20.372966 367 log.go:172] (0xc0006e2370) (0xc0007026e0) Stream removed, broadcasting: 3\nI0107 10:51:20.372974 367 log.go:172] (0xc0006e2370) (0xc0007a2dc0) Stream removed, broadcasting: 5\n" Jan 7 10:51:20.381: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 10:51:20.381: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> 
'/tmp/index.html' Jan 7 10:51:20.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 10:51:20.907: INFO: stderr: "I0107 10:51:20.567598 389 log.go:172] (0xc000704370) (0xc000724640) Create stream\nI0107 10:51:20.567925 389 log.go:172] (0xc000704370) (0xc000724640) Stream added, broadcasting: 1\nI0107 10:51:20.573834 389 log.go:172] (0xc000704370) Reply frame received for 1\nI0107 10:51:20.573877 389 log.go:172] (0xc000704370) (0xc00059cd20) Create stream\nI0107 10:51:20.573899 389 log.go:172] (0xc000704370) (0xc00059cd20) Stream added, broadcasting: 3\nI0107 10:51:20.574623 389 log.go:172] (0xc000704370) Reply frame received for 3\nI0107 10:51:20.574665 389 log.go:172] (0xc000704370) (0xc00054e000) Create stream\nI0107 10:51:20.574676 389 log.go:172] (0xc000704370) (0xc00054e000) Stream added, broadcasting: 5\nI0107 10:51:20.575470 389 log.go:172] (0xc000704370) Reply frame received for 5\nI0107 10:51:20.724148 389 log.go:172] (0xc000704370) Data frame received for 3\nI0107 10:51:20.724448 389 log.go:172] (0xc00059cd20) (3) Data frame handling\nI0107 10:51:20.724499 389 log.go:172] (0xc00059cd20) (3) Data frame sent\nI0107 10:51:20.894099 389 log.go:172] (0xc000704370) Data frame received for 1\nI0107 10:51:20.894476 389 log.go:172] (0xc000724640) (1) Data frame handling\nI0107 10:51:20.894600 389 log.go:172] (0xc000724640) (1) Data frame sent\nI0107 10:51:20.895760 389 log.go:172] (0xc000704370) (0xc000724640) Stream removed, broadcasting: 1\nI0107 10:51:20.896150 389 log.go:172] (0xc000704370) (0xc00059cd20) Stream removed, broadcasting: 3\nI0107 10:51:20.896234 389 log.go:172] (0xc000704370) (0xc00054e000) Stream removed, broadcasting: 5\nI0107 10:51:20.896271 389 log.go:172] (0xc000704370) Go away received\nI0107 10:51:20.896498 389 log.go:172] (0xc000704370) (0xc000724640) Stream removed, broadcasting: 1\nI0107 
10:51:20.896514 389 log.go:172] (0xc000704370) (0xc00059cd20) Stream removed, broadcasting: 3\nI0107 10:51:20.896524 389 log.go:172] (0xc000704370) (0xc00054e000) Stream removed, broadcasting: 5\n" Jan 7 10:51:20.907: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 10:51:20.907: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 10:51:20.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 7 10:51:21.536: INFO: stderr: "I0107 10:51:21.130688 411 log.go:172] (0xc0005c62c0) (0xc0008b88c0) Create stream\nI0107 10:51:21.131032 411 log.go:172] (0xc0005c62c0) (0xc0008b88c0) Stream added, broadcasting: 1\nI0107 10:51:21.138625 411 log.go:172] (0xc0005c62c0) Reply frame received for 1\nI0107 10:51:21.138721 411 log.go:172] (0xc0005c62c0) (0xc000218960) Create stream\nI0107 10:51:21.138727 411 log.go:172] (0xc0005c62c0) (0xc000218960) Stream added, broadcasting: 3\nI0107 10:51:21.140445 411 log.go:172] (0xc0005c62c0) Reply frame received for 3\nI0107 10:51:21.140548 411 log.go:172] (0xc0005c62c0) (0xc0008b8000) Create stream\nI0107 10:51:21.140565 411 log.go:172] (0xc0005c62c0) (0xc0008b8000) Stream added, broadcasting: 5\nI0107 10:51:21.142296 411 log.go:172] (0xc0005c62c0) Reply frame received for 5\nI0107 10:51:21.345762 411 log.go:172] (0xc0005c62c0) Data frame received for 3\nI0107 10:51:21.345854 411 log.go:172] (0xc000218960) (3) Data frame handling\nI0107 10:51:21.345875 411 log.go:172] (0xc000218960) (3) Data frame sent\nI0107 10:51:21.516735 411 log.go:172] (0xc0005c62c0) Data frame received for 1\nI0107 10:51:21.516849 411 log.go:172] (0xc0008b88c0) (1) Data frame handling\nI0107 10:51:21.516888 411 log.go:172] (0xc0008b88c0) (1) Data frame sent\nI0107 10:51:21.523811 411 log.go:172] (0xc0005c62c0) 
(0xc0008b88c0) Stream removed, broadcasting: 1\nI0107 10:51:21.524212 411 log.go:172] (0xc0005c62c0) (0xc000218960) Stream removed, broadcasting: 3\nI0107 10:51:21.524287 411 log.go:172] (0xc0005c62c0) (0xc0008b8000) Stream removed, broadcasting: 5\nI0107 10:51:21.524313 411 log.go:172] (0xc0005c62c0) Go away received\nI0107 10:51:21.524590 411 log.go:172] (0xc0005c62c0) (0xc0008b88c0) Stream removed, broadcasting: 1\nI0107 10:51:21.524601 411 log.go:172] (0xc0005c62c0) (0xc000218960) Stream removed, broadcasting: 3\nI0107 10:51:21.524606 411 log.go:172] (0xc0005c62c0) (0xc0008b8000) Stream removed, broadcasting: 5\n" Jan 7 10:51:21.536: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 7 10:51:21.536: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 7 10:51:21.536: INFO: Waiting for statefulset status.replicas updated to 0 Jan 7 10:51:21.583: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 7 10:51:21.583: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 7 10:51:21.583: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 7 10:51:21.609: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:21.609: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:21.609: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:21.609: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:21.609: INFO: Jan 7 10:51:21.609: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:23.670: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:23.670: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:23.671: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:23.671: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:23.671: INFO: Jan 7 10:51:23.671: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:24.704: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:24.705: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:24.705: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:24.705: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:24.705: INFO: Jan 7 10:51:24.705: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:25.717: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:25.717: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:25.717: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:25.717: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:25.717: INFO: Jan 7 10:51:25.717: INFO: StatefulSet ss has not reached 
scale 0, at 3 Jan 7 10:51:26.996: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:26.996: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:26.997: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:26.997: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:26.997: INFO: Jan 7 10:51:26.997: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:28.013: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:28.014: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:28.014: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:28.014: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:28.014: INFO: Jan 7 10:51:28.014: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:29.552: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:29.553: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:29.553: INFO: ss-1 
hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:29.553: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:29.553: INFO: Jan 7 10:51:29.553: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:30.595: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:30.595: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:30.596: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 
10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:30.596: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:30.596: INFO: Jan 7 10:51:30.596: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 7 10:51:31.624: INFO: POD NODE PHASE GRACE CONDITIONS Jan 7 10:51:31.624: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:46 +0000 UTC }] Jan 7 10:51:31.624: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:31.624: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:51:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 10:50:57 +0000 UTC }] Jan 7 10:51:31.624: INFO: Jan 7 10:51:31.624: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-qpvs5 Jan 7 10:51:32.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 10:51:32.823: INFO: rc: 1 Jan 7 10:51:32.824: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00163d4a0 exit status 1 true [0xc0019d0cf8 0xc0019d0d48 0xc0019d0d88] [0xc0019d0cf8 0xc0019d0d48 0xc0019d0d88] [0xc0019d0d38 0xc0019d0d70] [0x935700 0x935700] 0xc001af8840 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 7 10:51:42.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 7 10:51:43.028: INFO: rc: 1 Jan 7 10:51:43.028: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" 
not found [] 0xc00163d5f0 exit status 1 true [0xc0019d0d98 0xc0019d0e00 0xc0019d0e40] [0xc0019d0d98 0xc0019d0e00 0xc0019d0e40] [0xc0019d0dc8 0xc0019d0e30] [0x935700 0x935700] 0xc001af8ae0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found

error: exit status 1

Jan 7 10:51:53.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 7 10:51:53.232: INFO: rc: 1
Jan 7 10:51:53.232: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00103e5a0 exit status 1 true [0xc000dae558 0xc000dae570 0xc000dae5a8] [0xc000dae558 0xc000dae570 0xc000dae5a8] [0xc000dae568 0xc000dae5a0] [0x935700 0x935700] 0xc00191a960 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found

error: exit status 1

Jan 7 10:56:38.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qpvs5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 7 10:56:39.126: INFO: rc: 1
Jan 7 10:56:39.127: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Jan 7 10:56:39.127: INFO: Scaling statefulset ss to 0
Jan 7 10:56:39.150: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 7 10:56:39.153: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qpvs5
Jan 7 10:56:39.156: INFO: Scaling statefulset ss to 0
Jan 7 10:56:39.164: INFO: Waiting for statefulset status.replicas updated to 0
Jan 7 10:56:39.167: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:56:39.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qpvs5" for this suite.
Jan 7 10:56:47.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:56:47.454: INFO: namespace: e2e-tests-statefulset-qpvs5, resource: bindings, ignored listing per whitelist
Jan 7 10:56:47.463: INFO: namespace e2e-tests-statefulset-qpvs5 deletion completed in 8.259172673s

• [SLOW TEST:361.732 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:56:47.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 7 10:56:47.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 7 10:56:47.859: INFO: stderr: ""
Jan 7 10:56:47.859: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 7 10:56:47.869: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:56:47.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4gxqt" for this suite.
Jan 7 10:56:53.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:56:53.984: INFO: namespace: e2e-tests-kubectl-4gxqt, resource: bindings, ignored listing per whitelist
Jan 7 10:56:54.130: INFO: namespace e2e-tests-kubectl-4gxqt deletion completed in 6.247179342s

S [SKIPPING] [6.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 7 10:56:47.869: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:56:54.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 7 10:56:54.361: INFO: Waiting up to 5m0s for pod "pod-6a84e719-313c-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-ckkg7" to be "success or failure"
Jan 7 10:56:54.401: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.705636ms
Jan 7 10:56:56.421: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059407089s
Jan 7 10:56:58.532: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170087874s
Jan 7 10:57:00.720: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358127363s
Jan 7 10:57:02.792: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42989459s
Jan 7 10:57:04.815: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.453488365s
Jan 7 10:57:07.560: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.198616675s
STEP: Saw pod success
Jan 7 10:57:07.561: INFO: Pod "pod-6a84e719-313c-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan 7 10:57:07.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6a84e719-313c-11ea-8b51-0242ac110005 container test-container:
STEP: delete the pod
Jan 7 10:57:08.181: INFO: Waiting for pod pod-6a84e719-313c-11ea-8b51-0242ac110005 to disappear
Jan 7 10:57:08.196: INFO: Pod pod-6a84e719-313c-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:57:08.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ckkg7" for this suite.
Jan 7 10:57:16.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:57:16.325: INFO: namespace: e2e-tests-emptydir-ckkg7, resource: bindings, ignored listing per whitelist
Jan 7 10:57:16.489: INFO: namespace e2e-tests-emptydir-ckkg7 deletion completed in 8.2844889s

• [SLOW TEST:22.358 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:57:16.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-77dca5b3-313c-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 7 10:57:16.754: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-477p7" to be "success or failure"
Jan 7 10:57:16.769: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.109962ms
Jan 7 10:57:18.796: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042224105s
Jan 7 10:57:20.809: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05554304s
Jan 7 10:57:23.320: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566671365s
Jan 7 10:57:25.337: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.583155781s
Jan 7 10:57:27.368: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.614309128s
STEP: Saw pod success
Jan 7 10:57:27.368: INFO: Pod "pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan 7 10:57:27.400: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 7 10:57:27.666: INFO: Waiting for pod pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005 to disappear
Jan 7 10:57:27.742: INFO: Pod pod-projected-secrets-77e34340-313c-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:57:27.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-477p7" for this suite.
Jan 7 10:57:35.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:57:36.062: INFO: namespace: e2e-tests-projected-477p7, resource: bindings, ignored listing per whitelist
Jan 7 10:57:36.067: INFO: namespace e2e-tests-projected-477p7 deletion completed in 8.313968049s

• [SLOW TEST:19.577 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:57:36.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-8381c378-313c-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 7 10:57:36.260: INFO: Waiting up to 5m0s for pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-tg75g" to be "success or failure"
Jan 7 10:57:36.265: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085027ms
Jan 7 10:57:38.299: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038575391s
Jan 7 10:57:40.322: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061260816s
Jan 7 10:57:42.640: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.37939071s
Jan 7 10:57:44.672: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411304178s
Jan 7 10:57:46.704: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.442767959s
STEP: Saw pod success
Jan 7 10:57:46.704: INFO: Pod "pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan 7 10:57:46.730: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 7 10:57:47.015: INFO: Waiting for pod pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005 to disappear
Jan 7 10:57:47.097: INFO: Pod pod-secrets-8382ec74-313c-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 7 10:57:47.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tg75g" for this suite.
Jan 7 10:57:53.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 7 10:57:53.678: INFO: namespace: e2e-tests-secrets-tg75g, resource: bindings, ignored listing per whitelist
Jan 7 10:57:53.733: INFO: namespace e2e-tests-secrets-tg75g deletion completed in 6.602520138s

• [SLOW TEST:17.665 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 7 10:57:53.733: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 7 10:57:54.103: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 48.259619ms)
Jan  7 10:57:54.231: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 127.796048ms)
Jan  7 10:57:54.245: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.588088ms)
Jan  7 10:57:54.255: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.008833ms)
Jan  7 10:57:54.263: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.866409ms)
Jan  7 10:57:54.270: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.765683ms)
Jan  7 10:57:54.276: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.933143ms)
Jan  7 10:57:54.279: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.722174ms)
Jan  7 10:57:54.287: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.579898ms)
Jan  7 10:57:54.302: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.661119ms)
Jan  7 10:57:54.310: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.310485ms)
Jan  7 10:57:54.315: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.400768ms)
Jan  7 10:57:54.320: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.421663ms)
Jan  7 10:57:54.326: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.466885ms)
Jan  7 10:57:54.332: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.821356ms)
Jan  7 10:57:54.338: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.336065ms)
Jan  7 10:57:54.342: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.603977ms)
Jan  7 10:57:54.347: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.582035ms)
Jan  7 10:57:54.351: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.057631ms)
Jan  7 10:57:54.355: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.363743ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 10:57:54.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-257j4" for this suite.
Jan  7 10:58:00.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 10:58:00.513: INFO: namespace: e2e-tests-proxy-257j4, resource: bindings, ignored listing per whitelist
Jan  7 10:58:00.660: INFO: namespace e2e-tests-proxy-257j4 deletion completed in 6.300300607s

• [SLOW TEST:6.927 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
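Each numbered `(0)`–`(19)` line above is one GET against the node's `logs` proxy subresource, with the HTTP status and latency recorded per request. Outside the test framework, the same endpoint can be queried through the API server; a minimal sketch (the node name is a placeholder, not one from this run):

```shell
# List the kubelet's log directory via the API server's node proxy subresource.
# Replace <node-name> with a real node, e.g. one listed by `kubectl get nodes`.
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/"
```

The response is the same directory listing the test truncates above (alternatives.log etc.).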
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 10:58:00.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-923a5b48-313c-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 10:58:00.957: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-z2cqh" to be "success or failure"
Jan  7 10:58:00.982: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.048507ms
Jan  7 10:58:03.009: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051398455s
Jan  7 10:58:05.043: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085809701s
Jan  7 10:58:07.285: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.327896707s
Jan  7 10:58:09.312: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35441613s
Jan  7 10:58:11.328: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.370831446s
Jan  7 10:58:13.622: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.664385434s
STEP: Saw pod success
Jan  7 10:58:13.622: INFO: Pod "pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 10:58:13.634: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 10:58:14.190: INFO: Waiting for pod pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005 to disappear
Jan  7 10:58:14.198: INFO: Pod pod-projected-configmaps-923b5112-313c-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 10:58:14.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z2cqh" for this suite.
Jan  7 10:58:20.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 10:58:20.423: INFO: namespace: e2e-tests-projected-z2cqh, resource: bindings, ignored listing per whitelist
Jan  7 10:58:20.711: INFO: namespace e2e-tests-projected-z2cqh deletion completed in 6.506256113s

• [SLOW TEST:20.050 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
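The test above creates a ConfigMap, projects it into a volume with an explicit `defaultMode`, and asserts the mounted file carries those permission bits. A minimal sketch of the kind of pod spec being exercised (all names are illustrative, not the generated ones from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Print the mode of the projected file so it can be checked against defaultMode.
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/projected
  volumes:
  - name: config-volume
    projected:
      defaultMode: 0644        # YAML octal; the API stores this as decimal 420
      sources:
      - configMap:
          name: demo-configmap
```

Note that in JSON manifests `defaultMode` must be written as a decimal integer (420 for `0644`), since JSON has no octal literals.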
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 10:58:20.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-pk478
Jan  7 10:58:31.120: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-pk478
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 10:58:31.126: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:02:33.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pk478" for this suite.
Jan  7 11:02:41.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:02:41.225: INFO: namespace: e2e-tests-container-probe-pk478, resource: bindings, ignored listing per whitelist
Jan  7 11:02:41.322: INFO: namespace e2e-tests-container-probe-pk478 deletion completed in 8.274257346s

• [SLOW TEST:260.609 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
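The liveness-exec pod above runs for the full observation window with `restartCount` staying at 0, because its exec probe keeps succeeding. A sketch of a pod in the same shape as this conformance test (image and timings are assumptions, not read from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # The file is created once and never removed, so the probe never fails.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Because `cat /tmp/health` always exits 0, the kubelet never restarts the container, which is exactly the "should *not* be restarted" assertion checked against the initial restart count of 0.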
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:02:41.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  7 11:02:42.282: INFO: created pod pod-service-account-defaultsa
Jan  7 11:02:42.282: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  7 11:02:42.316: INFO: created pod pod-service-account-mountsa
Jan  7 11:02:42.317: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  7 11:02:42.426: INFO: created pod pod-service-account-nomountsa
Jan  7 11:02:42.426: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  7 11:02:42.497: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  7 11:02:42.498: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  7 11:02:42.699: INFO: created pod pod-service-account-mountsa-mountspec
Jan  7 11:02:42.699: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  7 11:02:42.763: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  7 11:02:42.764: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  7 11:02:42.878: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  7 11:02:42.878: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  7 11:02:42.920: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  7 11:02:42.921: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  7 11:02:44.014: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  7 11:02:44.014: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:02:44.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jtb6q" for this suite.
Jan  7 11:03:23.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:03:23.478: INFO: namespace: e2e-tests-svcaccounts-jtb6q, resource: bindings, ignored listing per whitelist
Jan  7 11:03:23.568: INFO: namespace e2e-tests-svcaccounts-jtb6q deletion completed in 38.664783268s

• [SLOW TEST:42.246 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
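The pod matrix above shows the precedence rule this test verifies: when `automountServiceAccountToken` is set on the pod spec it wins (e.g. `pod-service-account-nomountsa-mountspec` still mounts the token, and `pod-service-account-mountsa-nomountspec` does not); otherwise the ServiceAccount's setting applies. A hedged sketch of the opt-out configuration (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # default for pods using this SA
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level field overrides the SA's setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

With both fields false, no token volume is mounted into the pod, matching the `token volume mount: false` lines in the log.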
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:03:23.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 11:03:23.797: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  7 11:03:29.275: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  7 11:03:33.304: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  7 11:03:35.321: INFO: Creating deployment "test-rollover-deployment"
Jan  7 11:03:35.466: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  7 11:03:37.496: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  7 11:03:37.518: INFO: Ensure that both replica sets have 1 created replica
Jan  7 11:03:37.536: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  7 11:03:37.833: INFO: Updating deployment test-rollover-deployment
Jan  7 11:03:37.833: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  7 11:03:40.307: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  7 11:03:40.316: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  7 11:03:40.323: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:40.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991820, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:42.354: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:42.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991820, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:44.352: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:44.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991820, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:47.913: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:47.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991820, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:49.035: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:49.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991820, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:50.351: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:50.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991820, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:52.388: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:52.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:54.394: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:54.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:56.370: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:56.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:03:58.347: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:03:58.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:04:00.351: INFO: all replica sets need to contain the pod-template-hash label
Jan  7 11:04:00.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713991815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:04:02.399: INFO: 
Jan  7 11:04:02.399: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  7 11:04:02.475: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-mglzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglzr/deployments/test-rollover-deployment,UID:598ac15c-313d-11ea-a994-fa163e34d433,ResourceVersion:17465506,Generation:2,CreationTimestamp:2020-01-07 11:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-07 11:03:35 +0000 UTC 2020-01-07 11:03:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-07 11:04:02 +0000 UTC 2020-01-07 11:03:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  7 11:04:03.396: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-mglzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglzr/replicasets/test-rollover-deployment-5b8479fdb6,UID:5b0b74cc-313d-11ea-a994-fa163e34d433,ResourceVersion:17465497,Generation:2,CreationTimestamp:2020-01-07 11:03:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 598ac15c-313d-11ea-a994-fa163e34d433 0xc0021e9e67 0xc0021e9e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  7 11:04:03.396: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  7 11:04:03.397: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-mglzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglzr/replicasets/test-rollover-controller,UID:52a8256f-313d-11ea-a994-fa163e34d433,ResourceVersion:17465505,Generation:2,CreationTimestamp:2020-01-07 11:03:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 598ac15c-313d-11ea-a994-fa163e34d433 0xc0021e9cd7 0xc0021e9cd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  7 11:04:03.397: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-mglzr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mglzr/replicasets/test-rollover-deployment-58494b7559,UID:59a842cd-313d-11ea-a994-fa163e34d433,ResourceVersion:17465460,Generation:2,CreationTimestamp:2020-01-07 11:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 598ac15c-313d-11ea-a994-fa163e34d433 0xc0021e9d97 0xc0021e9d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  7 11:04:03.422: INFO: Pod "test-rollover-deployment-5b8479fdb6-72s5r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-72s5r,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-mglzr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mglzr/pods/test-rollover-deployment-5b8479fdb6-72s5r,UID:5c249a13-313d-11ea-a994-fa163e34d433,ResourceVersion:17465482,Generation:0,CreationTimestamp:2020-01-07 11:03:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 5b0b74cc-313d-11ea-a994-fa163e34d433 0xc00167ee67 0xc00167ee68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bsgxm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bsgxm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bsgxm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00167eed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00167eef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:03:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:03:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:03:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:03:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-07 11:03:40 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-07 11:03:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e2a8a19ec60cff4fc45baac4a9c47ec82358ffaa291e1f22c6e2f6eddf7502da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:04:03.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-mglzr" for this suite.
Jan  7 11:04:11.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:04:11.962: INFO: namespace: e2e-tests-deployment-mglzr, resource: bindings, ignored listing per whitelist
Jan  7 11:04:12.062: INFO: namespace e2e-tests-deployment-mglzr deletion completed in 8.625511533s

• [SLOW TEST:48.493 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
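For readers skimming the dump above: the rollover Deployment the test created corresponds roughly to the manifest below. This is a reconstruction from the logged spec (replicas, minReadySeconds, strategy, selector, and container are taken from the dump; everything else is assumed Kubernetes defaults), not the test's actual fixture file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must be ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never remove the old pod before the new one is available
      maxSurge: 1              # allow one extra pod during the rollover
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With maxUnavailable: 0 and maxSurge: 1, the controller scales the new ReplicaSet up first and only then scales the old ones to zero, which is exactly the "both old replica sets have no replicas" condition the test asserts.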
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:04:12.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 11:04:14.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-jcq85" to be "success or failure"
Jan  7 11:04:14.049: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.61159ms
Jan  7 11:04:16.205: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165774006s
Jan  7 11:04:18.249: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210159666s
Jan  7 11:04:20.275: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236016203s
Jan  7 11:04:22.287: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.247863752s
Jan  7 11:04:25.515: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.475629878s
STEP: Saw pod success
Jan  7 11:04:25.515: INFO: Pod "downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:04:25.976: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 11:04:26.111: INFO: Waiting for pod downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005 to disappear
Jan  7 11:04:26.120: INFO: Pod downwardapi-volume-709661ad-313d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:04:26.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jcq85" for this suite.
Jan  7 11:04:32.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:04:32.335: INFO: namespace: e2e-tests-projected-jcq85, resource: bindings, ignored listing per whitelist
Jan  7 11:04:32.349: INFO: namespace e2e-tests-projected-jcq85 deletion completed in 6.217806297s

• [SLOW TEST:20.287 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
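The downward API test above checks that limits.memory, when no limit is set on the container, resolves to the node's allocatable memory. A minimal pod that exercises the same behavior might look like this (illustrative only; the actual e2e fixture uses a generated name and its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name; the test generates one
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed stand-in for the e2e test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: no resources.limits.memory set, so the projected value
    # falls back to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```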
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:04:32.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  7 11:04:32.580: INFO: Waiting up to 5m0s for pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-lddfk" to be "success or failure"
Jan  7 11:04:32.597: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.584237ms
Jan  7 11:04:34.624: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044081691s
Jan  7 11:04:36.636: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055827575s
Jan  7 11:04:38.679: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098742546s
Jan  7 11:04:40.705: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124922874s
Jan  7 11:04:42.731: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15050014s
STEP: Saw pod success
Jan  7 11:04:42.731: INFO: Pod "pod-7b9baedd-313d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:04:42.736: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7b9baedd-313d-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:04:43.046: INFO: Waiting for pod pod-7b9baedd-313d-11ea-8b51-0242ac110005 to disappear
Jan  7 11:04:43.112: INFO: Pod pod-7b9baedd-313d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:04:43.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lddfk" for this suite.
Jan  7 11:04:49.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:04:49.242: INFO: namespace: e2e-tests-emptydir-lddfk, resource: bindings, ignored listing per whitelist
Jan  7 11:04:49.278: INFO: namespace e2e-tests-emptydir-lddfk deletion completed in 6.154524433s

• [SLOW TEST:16.928 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
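The "(non-root,0644,default)" test above writes a 0644-mode file into an emptyDir volume on the default medium while running as a non-root user. A sketch of an equivalent pod, under the assumption that busybox stands in for the e2e mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-example   # hypothetical name; the test generates one
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox              # assumed; the e2e test uses its own image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # "default" medium, i.e. node-local disk
```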
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:04:49.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-j4gns
Jan  7 11:05:01.556: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-j4gns
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 11:05:01.563: INFO: Initial restart count of pod liveness-http is 0
Jan  7 11:05:22.224: INFO: Restart count of pod e2e-tests-container-probe-j4gns/liveness-http is now 1 (20.661107736s elapsed)
Jan  7 11:05:42.649: INFO: Restart count of pod e2e-tests-container-probe-j4gns/liveness-http is now 2 (41.085975162s elapsed)
Jan  7 11:06:03.170: INFO: Restart count of pod e2e-tests-container-probe-j4gns/liveness-http is now 3 (1m1.607277743s elapsed)
Jan  7 11:06:21.334: INFO: Restart count of pod e2e-tests-container-probe-j4gns/liveness-http is now 4 (1m19.771551579s elapsed)
Jan  7 11:07:26.267: INFO: Restart count of pod e2e-tests-container-probe-j4gns/liveness-http is now 5 (2m24.704293382s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:07:26.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j4gns" for this suite.
Jan  7 11:07:32.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:07:32.517: INFO: namespace: e2e-tests-container-probe-j4gns, resource: bindings, ignored listing per whitelist
Jan  7 11:07:32.640: INFO: namespace e2e-tests-container-probe-j4gns deletion completed in 6.303228514s

• [SLOW TEST:163.362 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
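The restart counts logged above come from a pod whose HTTP liveness probe repeatedly fails, so the kubelet keeps restarting the container and restartCount increases monotonically. A minimal liveness-http pod of this shape (probe parameters and image are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  restartPolicy: Always         # kubelet restarts the container on each probe failure
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness  # assumed: a server whose /healthz starts failing after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5    # illustrative values; the e2e test sets its own
      periodSeconds: 3
```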
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:07:32.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 11:07:32.845: INFO: Creating deployment "test-recreate-deployment"
Jan  7 11:07:32.862: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  7 11:07:32.960: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  7 11:07:35.032: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  7 11:07:35.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:07:37.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:07:39.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:07:41.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:07:43.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  7 11:07:45.074: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  7 11:07:45.109: INFO: Updating deployment test-recreate-deployment
Jan  7 11:07:45.109: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  7 11:07:45.696: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-v2425,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2425/deployments/test-recreate-deployment,UID:e71eb8e8-313d-11ea-a994-fa163e34d433,ResourceVersion:17465940,Generation:2,CreationTimestamp:2020-01-07 11:07:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-07 11:07:45 +0000 UTC 2020-01-07 11:07:45 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-07 11:07:45 +0000 UTC 2020-01-07 11:07:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  7 11:07:45.713: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-v2425,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2425/replicasets/test-recreate-deployment-589c4bfd,UID:ee95722e-313d-11ea-a994-fa163e34d433,ResourceVersion:17465938,Generation:1,CreationTimestamp:2020-01-07 11:07:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e71eb8e8-313d-11ea-a994-fa163e34d433 0xc001b8027f 0xc001b802a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  7 11:07:45.713: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  7 11:07:45.714: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-v2425,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-v2425/replicasets/test-recreate-deployment-5bf7f65dc,UID:e73032b9-313d-11ea-a994-fa163e34d433,ResourceVersion:17465929,Generation:2,CreationTimestamp:2020-01-07 11:07:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e71eb8e8-313d-11ea-a994-fa163e34d433 0xc001b803a0 0xc001b803a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  7 11:07:45.724: INFO: Pod "test-recreate-deployment-589c4bfd-h4xl9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-h4xl9,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-v2425,SelfLink:/api/v1/namespaces/e2e-tests-deployment-v2425/pods/test-recreate-deployment-589c4bfd-h4xl9,UID:ee979064-313d-11ea-a994-fa163e34d433,ResourceVersion:17465941,Generation:0,CreationTimestamp:2020-01-07 11:07:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd ee95722e-313d-11ea-a994-fa163e34d433 0xc001b8112f 0xc001b81140}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n6jcv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6jcv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n6jcv true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b811a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b811c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:07:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:07:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:07:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:07:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 11:07:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:07:45.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-v2425" for this suite.
Jan  7 11:07:55.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:07:55.983: INFO: namespace: e2e-tests-deployment-v2425, resource: bindings, ignored listing per whitelist
Jan  7 11:07:56.007: INFO: namespace e2e-tests-deployment-v2425 deletion completed in 10.273720313s

• [SLOW TEST:23.366 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
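The test above exercises the Recreate strategy: the old ReplicaSet (running redis) is scaled to zero before the new ReplicaSet (running nginx) is created, which is why the final Deployment status shows AvailableReplicas:0 and UnavailableReplicas:1 while the new pod is still Pending. A minimal manifest sketch with the same shape as the Deployment dumped above (names and image taken from the log; this is an illustration, not the object the test framework actually generates):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate        # delete all old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```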
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:07:56.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  7 11:08:20.288: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:20.288: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:20.406225       8 log.go:172] (0xc0001e3340) (0xc00076f400) Create stream
I0107 11:08:20.406498       8 log.go:172] (0xc0001e3340) (0xc00076f400) Stream added, broadcasting: 1
I0107 11:08:20.434165       8 log.go:172] (0xc0001e3340) Reply frame received for 1
I0107 11:08:20.434343       8 log.go:172] (0xc0001e3340) (0xc0017ef900) Create stream
I0107 11:08:20.434368       8 log.go:172] (0xc0001e3340) (0xc0017ef900) Stream added, broadcasting: 3
I0107 11:08:20.436367       8 log.go:172] (0xc0001e3340) Reply frame received for 3
I0107 11:08:20.436418       8 log.go:172] (0xc0001e3340) (0xc00076f4a0) Create stream
I0107 11:08:20.436437       8 log.go:172] (0xc0001e3340) (0xc00076f4a0) Stream added, broadcasting: 5
I0107 11:08:20.439886       8 log.go:172] (0xc0001e3340) Reply frame received for 5
I0107 11:08:20.800344       8 log.go:172] (0xc0001e3340) Data frame received for 3
I0107 11:08:20.800537       8 log.go:172] (0xc0017ef900) (3) Data frame handling
I0107 11:08:20.800610       8 log.go:172] (0xc0017ef900) (3) Data frame sent
I0107 11:08:20.988378       8 log.go:172] (0xc0001e3340) (0xc0017ef900) Stream removed, broadcasting: 3
I0107 11:08:20.988536       8 log.go:172] (0xc0001e3340) Data frame received for 1
I0107 11:08:20.988546       8 log.go:172] (0xc00076f400) (1) Data frame handling
I0107 11:08:20.988573       8 log.go:172] (0xc00076f400) (1) Data frame sent
I0107 11:08:20.988619       8 log.go:172] (0xc0001e3340) (0xc00076f400) Stream removed, broadcasting: 1
I0107 11:08:20.988874       8 log.go:172] (0xc0001e3340) (0xc00076f4a0) Stream removed, broadcasting: 5
I0107 11:08:20.988894       8 log.go:172] (0xc0001e3340) Go away received
I0107 11:08:20.989528       8 log.go:172] (0xc0001e3340) (0xc00076f400) Stream removed, broadcasting: 1
I0107 11:08:20.989549       8 log.go:172] (0xc0001e3340) (0xc0017ef900) Stream removed, broadcasting: 3
I0107 11:08:20.989558       8 log.go:172] (0xc0001e3340) (0xc00076f4a0) Stream removed, broadcasting: 5
Jan  7 11:08:20.989: INFO: Exec stderr: ""
Jan  7 11:08:20.989: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:20.989: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:21.060570       8 log.go:172] (0xc001028580) (0xc001a9a0a0) Create stream
I0107 11:08:21.061160       8 log.go:172] (0xc001028580) (0xc001a9a0a0) Stream added, broadcasting: 1
I0107 11:08:21.066641       8 log.go:172] (0xc001028580) Reply frame received for 1
I0107 11:08:21.066764       8 log.go:172] (0xc001028580) (0xc0017c2f00) Create stream
I0107 11:08:21.066789       8 log.go:172] (0xc001028580) (0xc0017c2f00) Stream added, broadcasting: 3
I0107 11:08:21.070652       8 log.go:172] (0xc001028580) Reply frame received for 3
I0107 11:08:21.070704       8 log.go:172] (0xc001028580) (0xc0017c2fa0) Create stream
I0107 11:08:21.070721       8 log.go:172] (0xc001028580) (0xc0017c2fa0) Stream added, broadcasting: 5
I0107 11:08:21.071792       8 log.go:172] (0xc001028580) Reply frame received for 5
I0107 11:08:21.188424       8 log.go:172] (0xc001028580) Data frame received for 3
I0107 11:08:21.188518       8 log.go:172] (0xc0017c2f00) (3) Data frame handling
I0107 11:08:21.188562       8 log.go:172] (0xc0017c2f00) (3) Data frame sent
I0107 11:08:21.302010       8 log.go:172] (0xc001028580) Data frame received for 1
I0107 11:08:21.302152       8 log.go:172] (0xc001028580) (0xc0017c2f00) Stream removed, broadcasting: 3
I0107 11:08:21.302215       8 log.go:172] (0xc001a9a0a0) (1) Data frame handling
I0107 11:08:21.302240       8 log.go:172] (0xc001a9a0a0) (1) Data frame sent
I0107 11:08:21.302283       8 log.go:172] (0xc001028580) (0xc0017c2fa0) Stream removed, broadcasting: 5
I0107 11:08:21.302318       8 log.go:172] (0xc001028580) (0xc001a9a0a0) Stream removed, broadcasting: 1
I0107 11:08:21.302330       8 log.go:172] (0xc001028580) Go away received
I0107 11:08:21.302803       8 log.go:172] (0xc001028580) (0xc001a9a0a0) Stream removed, broadcasting: 1
I0107 11:08:21.302820       8 log.go:172] (0xc001028580) (0xc0017c2f00) Stream removed, broadcasting: 3
I0107 11:08:21.302830       8 log.go:172] (0xc001028580) (0xc0017c2fa0) Stream removed, broadcasting: 5
Jan  7 11:08:21.302: INFO: Exec stderr: ""
Jan  7 11:08:21.303: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:21.303: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:21.364589       8 log.go:172] (0xc00213c210) (0xc0017efd60) Create stream
I0107 11:08:21.364751       8 log.go:172] (0xc00213c210) (0xc0017efd60) Stream added, broadcasting: 1
I0107 11:08:21.369469       8 log.go:172] (0xc00213c210) Reply frame received for 1
I0107 11:08:21.369499       8 log.go:172] (0xc00213c210) (0xc001a9a1e0) Create stream
I0107 11:08:21.369510       8 log.go:172] (0xc00213c210) (0xc001a9a1e0) Stream added, broadcasting: 3
I0107 11:08:21.371121       8 log.go:172] (0xc00213c210) Reply frame received for 3
I0107 11:08:21.371149       8 log.go:172] (0xc00213c210) (0xc00076f5e0) Create stream
I0107 11:08:21.371159       8 log.go:172] (0xc00213c210) (0xc00076f5e0) Stream added, broadcasting: 5
I0107 11:08:21.372151       8 log.go:172] (0xc00213c210) Reply frame received for 5
I0107 11:08:21.455943       8 log.go:172] (0xc00213c210) Data frame received for 3
I0107 11:08:21.456043       8 log.go:172] (0xc001a9a1e0) (3) Data frame handling
I0107 11:08:21.456095       8 log.go:172] (0xc001a9a1e0) (3) Data frame sent
I0107 11:08:21.548270       8 log.go:172] (0xc00213c210) (0xc001a9a1e0) Stream removed, broadcasting: 3
I0107 11:08:21.548682       8 log.go:172] (0xc00213c210) Data frame received for 1
I0107 11:08:21.548959       8 log.go:172] (0xc00213c210) (0xc00076f5e0) Stream removed, broadcasting: 5
I0107 11:08:21.548990       8 log.go:172] (0xc0017efd60) (1) Data frame handling
I0107 11:08:21.549016       8 log.go:172] (0xc0017efd60) (1) Data frame sent
I0107 11:08:21.549025       8 log.go:172] (0xc00213c210) (0xc0017efd60) Stream removed, broadcasting: 1
I0107 11:08:21.549041       8 log.go:172] (0xc00213c210) Go away received
I0107 11:08:21.550118       8 log.go:172] (0xc00213c210) (0xc0017efd60) Stream removed, broadcasting: 1
I0107 11:08:21.550242       8 log.go:172] (0xc00213c210) (0xc001a9a1e0) Stream removed, broadcasting: 3
I0107 11:08:21.550261       8 log.go:172] (0xc00213c210) (0xc00076f5e0) Stream removed, broadcasting: 5
Jan  7 11:08:21.550: INFO: Exec stderr: ""
Jan  7 11:08:21.550: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:21.550: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:21.624327       8 log.go:172] (0xc001028b00) (0xc001a9a460) Create stream
I0107 11:08:21.624456       8 log.go:172] (0xc001028b00) (0xc001a9a460) Stream added, broadcasting: 1
I0107 11:08:21.632264       8 log.go:172] (0xc001028b00) Reply frame received for 1
I0107 11:08:21.632331       8 log.go:172] (0xc001028b00) (0xc00076f680) Create stream
I0107 11:08:21.632347       8 log.go:172] (0xc001028b00) (0xc00076f680) Stream added, broadcasting: 3
I0107 11:08:21.633288       8 log.go:172] (0xc001028b00) Reply frame received for 3
I0107 11:08:21.633321       8 log.go:172] (0xc001028b00) (0xc001a9a500) Create stream
I0107 11:08:21.633333       8 log.go:172] (0xc001028b00) (0xc001a9a500) Stream added, broadcasting: 5
I0107 11:08:21.634379       8 log.go:172] (0xc001028b00) Reply frame received for 5
I0107 11:08:21.710494       8 log.go:172] (0xc001028b00) Data frame received for 3
I0107 11:08:21.710580       8 log.go:172] (0xc00076f680) (3) Data frame handling
I0107 11:08:21.710613       8 log.go:172] (0xc00076f680) (3) Data frame sent
I0107 11:08:21.815922       8 log.go:172] (0xc001028b00) Data frame received for 1
I0107 11:08:21.816010       8 log.go:172] (0xc001028b00) (0xc00076f680) Stream removed, broadcasting: 3
I0107 11:08:21.816077       8 log.go:172] (0xc001a9a460) (1) Data frame handling
I0107 11:08:21.816102       8 log.go:172] (0xc001a9a460) (1) Data frame sent
I0107 11:08:21.816120       8 log.go:172] (0xc001028b00) (0xc001a9a460) Stream removed, broadcasting: 1
I0107 11:08:21.816424       8 log.go:172] (0xc001028b00) (0xc001a9a500) Stream removed, broadcasting: 5
I0107 11:08:21.816502       8 log.go:172] (0xc001028b00) (0xc001a9a460) Stream removed, broadcasting: 1
I0107 11:08:21.816536       8 log.go:172] (0xc001028b00) (0xc00076f680) Stream removed, broadcasting: 3
I0107 11:08:21.816555       8 log.go:172] (0xc001028b00) (0xc001a9a500) Stream removed, broadcasting: 5
I0107 11:08:21.816652       8 log.go:172] (0xc001028b00) Go away received
Jan  7 11:08:21.816: INFO: Exec stderr: ""
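Each ExecWithOptions entry in this log follows a fixed `Key:value` layout. A minimal Python sketch for pulling the command, namespace, pod, and container out of such lines when triaging a run (`parse_exec_line` is a hypothetical helper, not part of the e2e framework):

```python
import re

# Matches the ExecWithOptions entries emitted by the e2e framework, e.g.
# "ExecWithOptions {Command:[cat /etc/hosts] Namespace:... PodName:test-pod ContainerName:busybox-1 ...}"
EXEC_RE = re.compile(
    r"ExecWithOptions \{Command:\[(?P<command>[^\]]*)\]"
    r" Namespace:(?P<namespace>\S+)"
    r" PodName:(?P<pod>\S+)"
    r" ContainerName:(?P<container>\S+)"
)

def parse_exec_line(line: str):
    """Return (command, namespace, pod, container), or None if no match."""
    m = EXEC_RE.search(line)
    if m is None:
        return None
    return (m.group("command"), m.group("namespace"),
            m.group("pod"), m.group("container"))

line = ('Jan  7 11:08:21.550: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] '
        'Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod '
        'ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true '
        'PreserveWhitespace:false}')
print(parse_exec_line(line))
```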
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  7 11:08:21.816: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:21.816: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:21.887546       8 log.go:172] (0xc0021ea2c0) (0xc0014da820) Create stream
I0107 11:08:21.887768       8 log.go:172] (0xc0021ea2c0) (0xc0014da820) Stream added, broadcasting: 1
I0107 11:08:21.892805       8 log.go:172] (0xc0021ea2c0) Reply frame received for 1
I0107 11:08:21.892913       8 log.go:172] (0xc0021ea2c0) (0xc0014da8c0) Create stream
I0107 11:08:21.892939       8 log.go:172] (0xc0021ea2c0) (0xc0014da8c0) Stream added, broadcasting: 3
I0107 11:08:21.893831       8 log.go:172] (0xc0021ea2c0) Reply frame received for 3
I0107 11:08:21.893850       8 log.go:172] (0xc0021ea2c0) (0xc001a9a5a0) Create stream
I0107 11:08:21.893859       8 log.go:172] (0xc0021ea2c0) (0xc001a9a5a0) Stream added, broadcasting: 5
I0107 11:08:21.894657       8 log.go:172] (0xc0021ea2c0) Reply frame received for 5
I0107 11:08:22.020302       8 log.go:172] (0xc0021ea2c0) Data frame received for 3
I0107 11:08:22.020409       8 log.go:172] (0xc0014da8c0) (3) Data frame handling
I0107 11:08:22.020437       8 log.go:172] (0xc0014da8c0) (3) Data frame sent
I0107 11:08:22.181494       8 log.go:172] (0xc0021ea2c0) (0xc0014da8c0) Stream removed, broadcasting: 3
I0107 11:08:22.181939       8 log.go:172] (0xc0021ea2c0) Data frame received for 1
I0107 11:08:22.182203       8 log.go:172] (0xc0021ea2c0) (0xc001a9a5a0) Stream removed, broadcasting: 5
I0107 11:08:22.182300       8 log.go:172] (0xc0014da820) (1) Data frame handling
I0107 11:08:22.182326       8 log.go:172] (0xc0014da820) (1) Data frame sent
I0107 11:08:22.182345       8 log.go:172] (0xc0021ea2c0) (0xc0014da820) Stream removed, broadcasting: 1
I0107 11:08:22.182435       8 log.go:172] (0xc0021ea2c0) Go away received
I0107 11:08:22.182917       8 log.go:172] (0xc0021ea2c0) (0xc0014da820) Stream removed, broadcasting: 1
I0107 11:08:22.182967       8 log.go:172] (0xc0021ea2c0) (0xc0014da8c0) Stream removed, broadcasting: 3
I0107 11:08:22.182986       8 log.go:172] (0xc0021ea2c0) (0xc001a9a5a0) Stream removed, broadcasting: 5
Jan  7 11:08:22.183: INFO: Exec stderr: ""
Jan  7 11:08:22.183: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:22.183: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:22.277235       8 log.go:172] (0xc0021ea790) (0xc0014dab40) Create stream
I0107 11:08:22.277387       8 log.go:172] (0xc0021ea790) (0xc0014dab40) Stream added, broadcasting: 1
I0107 11:08:22.281503       8 log.go:172] (0xc0021ea790) Reply frame received for 1
I0107 11:08:22.281526       8 log.go:172] (0xc0021ea790) (0xc001a9a640) Create stream
I0107 11:08:22.281534       8 log.go:172] (0xc0021ea790) (0xc001a9a640) Stream added, broadcasting: 3
I0107 11:08:22.282452       8 log.go:172] (0xc0021ea790) Reply frame received for 3
I0107 11:08:22.282469       8 log.go:172] (0xc0021ea790) (0xc0017efea0) Create stream
I0107 11:08:22.282476       8 log.go:172] (0xc0021ea790) (0xc0017efea0) Stream added, broadcasting: 5
I0107 11:08:22.283780       8 log.go:172] (0xc0021ea790) Reply frame received for 5
I0107 11:08:22.408334       8 log.go:172] (0xc0021ea790) Data frame received for 3
I0107 11:08:22.408442       8 log.go:172] (0xc001a9a640) (3) Data frame handling
I0107 11:08:22.408465       8 log.go:172] (0xc001a9a640) (3) Data frame sent
I0107 11:08:22.624018       8 log.go:172] (0xc0021ea790) (0xc0017efea0) Stream removed, broadcasting: 5
I0107 11:08:22.624177       8 log.go:172] (0xc0021ea790) Data frame received for 1
I0107 11:08:22.624210       8 log.go:172] (0xc0021ea790) (0xc001a9a640) Stream removed, broadcasting: 3
I0107 11:08:22.624246       8 log.go:172] (0xc0014dab40) (1) Data frame handling
I0107 11:08:22.624272       8 log.go:172] (0xc0014dab40) (1) Data frame sent
I0107 11:08:22.624283       8 log.go:172] (0xc0021ea790) (0xc0014dab40) Stream removed, broadcasting: 1
I0107 11:08:22.624302       8 log.go:172] (0xc0021ea790) Go away received
I0107 11:08:22.624751       8 log.go:172] (0xc0021ea790) (0xc0014dab40) Stream removed, broadcasting: 1
I0107 11:08:22.624769       8 log.go:172] (0xc0021ea790) (0xc001a9a640) Stream removed, broadcasting: 3
I0107 11:08:22.624774       8 log.go:172] (0xc0021ea790) (0xc0017efea0) Stream removed, broadcasting: 5
Jan  7 11:08:22.624: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  7 11:08:22.625: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:22.625: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:22.731842       8 log.go:172] (0xc001028fd0) (0xc001a9a8c0) Create stream
I0107 11:08:22.731922       8 log.go:172] (0xc001028fd0) (0xc001a9a8c0) Stream added, broadcasting: 1
I0107 11:08:22.738532       8 log.go:172] (0xc001028fd0) Reply frame received for 1
I0107 11:08:22.738618       8 log.go:172] (0xc001028fd0) (0xc0022d2000) Create stream
I0107 11:08:22.738628       8 log.go:172] (0xc001028fd0) (0xc0022d2000) Stream added, broadcasting: 3
I0107 11:08:22.739635       8 log.go:172] (0xc001028fd0) Reply frame received for 3
I0107 11:08:22.739652       8 log.go:172] (0xc001028fd0) (0xc0017c3180) Create stream
I0107 11:08:22.739660       8 log.go:172] (0xc001028fd0) (0xc0017c3180) Stream added, broadcasting: 5
I0107 11:08:22.740482       8 log.go:172] (0xc001028fd0) Reply frame received for 5
I0107 11:08:22.835859       8 log.go:172] (0xc001028fd0) Data frame received for 3
I0107 11:08:22.836021       8 log.go:172] (0xc0022d2000) (3) Data frame handling
I0107 11:08:22.836059       8 log.go:172] (0xc0022d2000) (3) Data frame sent
I0107 11:08:22.967911       8 log.go:172] (0xc001028fd0) Data frame received for 1
I0107 11:08:22.968090       8 log.go:172] (0xc001028fd0) (0xc0022d2000) Stream removed, broadcasting: 3
I0107 11:08:22.968189       8 log.go:172] (0xc001a9a8c0) (1) Data frame handling
I0107 11:08:22.968234       8 log.go:172] (0xc001a9a8c0) (1) Data frame sent
I0107 11:08:22.968285       8 log.go:172] (0xc001028fd0) (0xc0017c3180) Stream removed, broadcasting: 5
I0107 11:08:22.968337       8 log.go:172] (0xc001028fd0) (0xc001a9a8c0) Stream removed, broadcasting: 1
I0107 11:08:22.968375       8 log.go:172] (0xc001028fd0) Go away received
I0107 11:08:22.968853       8 log.go:172] (0xc001028fd0) (0xc001a9a8c0) Stream removed, broadcasting: 1
I0107 11:08:22.968876       8 log.go:172] (0xc001028fd0) (0xc0022d2000) Stream removed, broadcasting: 3
I0107 11:08:22.968898       8 log.go:172] (0xc001028fd0) (0xc0017c3180) Stream removed, broadcasting: 5
Jan  7 11:08:22.968: INFO: Exec stderr: ""
Jan  7 11:08:22.969: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:22.969: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:23.043170       8 log.go:172] (0xc0021eac60) (0xc0014dadc0) Create stream
I0107 11:08:23.043398       8 log.go:172] (0xc0021eac60) (0xc0014dadc0) Stream added, broadcasting: 1
I0107 11:08:23.047884       8 log.go:172] (0xc0021eac60) Reply frame received for 1
I0107 11:08:23.047925       8 log.go:172] (0xc0021eac60) (0xc00076f7c0) Create stream
I0107 11:08:23.047941       8 log.go:172] (0xc0021eac60) (0xc00076f7c0) Stream added, broadcasting: 3
I0107 11:08:23.048823       8 log.go:172] (0xc0021eac60) Reply frame received for 3
I0107 11:08:23.048852       8 log.go:172] (0xc0021eac60) (0xc001a9a960) Create stream
I0107 11:08:23.048865       8 log.go:172] (0xc0021eac60) (0xc001a9a960) Stream added, broadcasting: 5
I0107 11:08:23.050584       8 log.go:172] (0xc0021eac60) Reply frame received for 5
I0107 11:08:23.164745       8 log.go:172] (0xc0021eac60) Data frame received for 3
I0107 11:08:23.164851       8 log.go:172] (0xc00076f7c0) (3) Data frame handling
I0107 11:08:23.164901       8 log.go:172] (0xc00076f7c0) (3) Data frame sent
I0107 11:08:23.296162       8 log.go:172] (0xc0021eac60) Data frame received for 1
I0107 11:08:23.296306       8 log.go:172] (0xc0021eac60) (0xc00076f7c0) Stream removed, broadcasting: 3
I0107 11:08:23.296426       8 log.go:172] (0xc0014dadc0) (1) Data frame handling
I0107 11:08:23.296480       8 log.go:172] (0xc0014dadc0) (1) Data frame sent
I0107 11:08:23.296513       8 log.go:172] (0xc0021eac60) (0xc0014dadc0) Stream removed, broadcasting: 1
I0107 11:08:23.297578       8 log.go:172] (0xc0021eac60) (0xc001a9a960) Stream removed, broadcasting: 5
I0107 11:08:23.297821       8 log.go:172] (0xc0021eac60) (0xc0014dadc0) Stream removed, broadcasting: 1
I0107 11:08:23.298008       8 log.go:172] (0xc0021eac60) (0xc00076f7c0) Stream removed, broadcasting: 3
I0107 11:08:23.298025       8 log.go:172] (0xc0021eac60) (0xc001a9a960) Stream removed, broadcasting: 5
I0107 11:08:23.298066       8 log.go:172] (0xc0021eac60) Go away received
Jan  7 11:08:23.298: INFO: Exec stderr: ""
Jan  7 11:08:23.298: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:23.298: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:23.356878       8 log.go:172] (0xc0010294a0) (0xc001a9ab40) Create stream
I0107 11:08:23.357028       8 log.go:172] (0xc0010294a0) (0xc001a9ab40) Stream added, broadcasting: 1
I0107 11:08:23.363715       8 log.go:172] (0xc0010294a0) Reply frame received for 1
I0107 11:08:23.363764       8 log.go:172] (0xc0010294a0) (0xc0022d2140) Create stream
I0107 11:08:23.363776       8 log.go:172] (0xc0010294a0) (0xc0022d2140) Stream added, broadcasting: 3
I0107 11:08:23.365304       8 log.go:172] (0xc0010294a0) Reply frame received for 3
I0107 11:08:23.365324       8 log.go:172] (0xc0010294a0) (0xc0017c3220) Create stream
I0107 11:08:23.365334       8 log.go:172] (0xc0010294a0) (0xc0017c3220) Stream added, broadcasting: 5
I0107 11:08:23.366163       8 log.go:172] (0xc0010294a0) Reply frame received for 5
I0107 11:08:23.452926       8 log.go:172] (0xc0010294a0) Data frame received for 3
I0107 11:08:23.453091       8 log.go:172] (0xc0022d2140) (3) Data frame handling
I0107 11:08:23.453152       8 log.go:172] (0xc0022d2140) (3) Data frame sent
I0107 11:08:23.570141       8 log.go:172] (0xc0010294a0) (0xc0022d2140) Stream removed, broadcasting: 3
I0107 11:08:23.570491       8 log.go:172] (0xc0010294a0) Data frame received for 1
I0107 11:08:23.570965       8 log.go:172] (0xc0010294a0) (0xc0017c3220) Stream removed, broadcasting: 5
I0107 11:08:23.571260       8 log.go:172] (0xc001a9ab40) (1) Data frame handling
I0107 11:08:23.571316       8 log.go:172] (0xc001a9ab40) (1) Data frame sent
I0107 11:08:23.571352       8 log.go:172] (0xc0010294a0) (0xc001a9ab40) Stream removed, broadcasting: 1
I0107 11:08:23.571402       8 log.go:172] (0xc0010294a0) Go away received
I0107 11:08:23.571864       8 log.go:172] (0xc0010294a0) (0xc001a9ab40) Stream removed, broadcasting: 1
I0107 11:08:23.571884       8 log.go:172] (0xc0010294a0) (0xc0022d2140) Stream removed, broadcasting: 3
I0107 11:08:23.571986       8 log.go:172] (0xc0010294a0) (0xc0017c3220) Stream removed, broadcasting: 5
Jan  7 11:08:23.572: INFO: Exec stderr: ""
Jan  7 11:08:23.572: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bsg2b PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:08:23.572: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:08:23.694320       8 log.go:172] (0xc00034be40) (0xc0010700a0) Create stream
I0107 11:08:23.694493       8 log.go:172] (0xc00034be40) (0xc0010700a0) Stream added, broadcasting: 1
I0107 11:08:23.699363       8 log.go:172] (0xc00034be40) Reply frame received for 1
I0107 11:08:23.699442       8 log.go:172] (0xc00034be40) (0xc001b88000) Create stream
I0107 11:08:23.699459       8 log.go:172] (0xc00034be40) (0xc001b88000) Stream added, broadcasting: 3
I0107 11:08:23.700747       8 log.go:172] (0xc00034be40) Reply frame received for 3
I0107 11:08:23.700791       8 log.go:172] (0xc00034be40) (0xc0017ee000) Create stream
I0107 11:08:23.700839       8 log.go:172] (0xc00034be40) (0xc0017ee000) Stream added, broadcasting: 5
I0107 11:08:23.702020       8 log.go:172] (0xc00034be40) Reply frame received for 5
I0107 11:08:23.799201       8 log.go:172] (0xc00034be40) Data frame received for 3
I0107 11:08:23.799386       8 log.go:172] (0xc001b88000) (3) Data frame handling
I0107 11:08:23.799454       8 log.go:172] (0xc001b88000) (3) Data frame sent
I0107 11:08:24.019818       8 log.go:172] (0xc00034be40) Data frame received for 1
I0107 11:08:24.019942       8 log.go:172] (0xc00034be40) (0xc001b88000) Stream removed, broadcasting: 3
I0107 11:08:24.020010       8 log.go:172] (0xc0010700a0) (1) Data frame handling
I0107 11:08:24.020054       8 log.go:172] (0xc0010700a0) (1) Data frame sent
I0107 11:08:24.020086       8 log.go:172] (0xc00034be40) (0xc0010700a0) Stream removed, broadcasting: 1
I0107 11:08:24.020129       8 log.go:172] (0xc00034be40) (0xc0017ee000) Stream removed, broadcasting: 5
I0107 11:08:24.020449       8 log.go:172] (0xc00034be40) Go away received
I0107 11:08:24.020590       8 log.go:172] (0xc00034be40) (0xc0010700a0) Stream removed, broadcasting: 1
I0107 11:08:24.020647       8 log.go:172] (0xc00034be40) (0xc001b88000) Stream removed, broadcasting: 3
I0107 11:08:24.020661       8 log.go:172] (0xc00034be40) (0xc0017ee000) Stream removed, broadcasting: 5
Jan  7 11:08:24.020: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:08:24.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-bsg2b" for this suite.
Jan  7 11:09:14.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:09:14.149: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-bsg2b, resource: bindings, ignored listing per whitelist
Jan  7 11:09:14.213: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-bsg2b deletion completed in 50.171355042s

• [SLOW TEST:78.206 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:09:14.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 11:09:14.394: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:09:15.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-5xjkf" for this suite.
Jan  7 11:09:21.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:09:21.740: INFO: namespace: e2e-tests-custom-resource-definition-5xjkf, resource: bindings, ignored listing per whitelist
Jan  7 11:09:21.910: INFO: namespace e2e-tests-custom-resource-definition-5xjkf deletion completed in 6.257827055s

• [SLOW TEST:7.697 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:09:21.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2840855d-313e-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 11:09:22.263: INFO: Waiting up to 5m0s for pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-nzqt6" to be "success or failure"
Jan  7 11:09:22.277: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.243301ms
Jan  7 11:09:24.297: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033514654s
Jan  7 11:09:26.317: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053018893s
Jan  7 11:09:28.773: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509467436s
Jan  7 11:09:30.797: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532916007s
Jan  7 11:09:32.827: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.563146918s
STEP: Saw pod success
Jan  7 11:09:32.827: INFO: Pod "pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:09:32.859: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 11:09:33.120: INFO: Waiting for pod pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005 to disappear
Jan  7 11:09:33.129: INFO: Pod pod-secrets-284fb50c-313e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:09:33.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nzqt6" for this suite.
Jan  7 11:09:39.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:09:39.272: INFO: namespace: e2e-tests-secrets-nzqt6, resource: bindings, ignored listing per whitelist
Jan  7 11:09:39.467: INFO: namespace e2e-tests-secrets-nzqt6 deletion completed in 6.326826096s

• [SLOW TEST:17.555 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:09:39.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 11:09:39.847: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  7 11:09:45.449: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  7 11:09:49.509: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  7 11:09:49.606: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-nq4x2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nq4x2/deployments/test-cleanup-deployment,UID:3897b207-313e-11ea-a994-fa163e34d433,ResourceVersion:17466230,Generation:1,CreationTimestamp:2020-01-07 11:09:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  7 11:09:49.840: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  7 11:09:49.840: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  7 11:09:49.842: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-nq4x2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nq4x2/replicasets/test-cleanup-controller,UID:32ba7f05-313e-11ea-a994-fa163e34d433,ResourceVersion:17466232,Generation:1,CreationTimestamp:2020-01-07 11:09:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3897b207-313e-11ea-a994-fa163e34d433 0xc00167f127 0xc00167f128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  7 11:09:49.870: INFO: Pod "test-cleanup-controller-2dxkt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-2dxkt,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-nq4x2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nq4x2/pods/test-cleanup-controller-2dxkt,UID:32d2e29d-313e-11ea-a994-fa163e34d433,ResourceVersion:17466227,Generation:0,CreationTimestamp:2020-01-07 11:09:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 32ba7f05-313e-11ea-a994-fa163e34d433 0xc000ff6917 0xc000ff6918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7v4hm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7v4hm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-7v4hm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ff6980} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc000ff69a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:09:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:09:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:09:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:09:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-07 11:09:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 11:09:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://76f9703dc6aecd110c2f8c4d8f3663e4105324bc7a2c3c1fb92c819c4f0be55d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:09:49.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nq4x2" for this suite.
Jan  7 11:10:00.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:10:00.853: INFO: namespace: e2e-tests-deployment-nq4x2, resource: bindings, ignored listing per whitelist
Jan  7 11:10:01.006: INFO: namespace e2e-tests-deployment-nq4x2 deletion completed in 10.961624174s

• [SLOW TEST:21.539 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:10:01.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s8nbk;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s8nbk;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s8nbk.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 70.100.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.100.70_udp@PTR;check="$$(dig +tcp +noall +answer +search 70.100.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.100.70_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s8nbk;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s8nbk;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-s8nbk.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s8nbk.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 70.100.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.100.70_udp@PTR;check="$$(dig +tcp +noall +answer +search 70.100.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.100.70_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  7 11:10:17.986: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:17.993: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.001: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8nbk from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.006: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8nbk from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.019: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-s8nbk.svc from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.025: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-s8nbk.svc from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.028: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.050: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.056: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.097: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.103: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005)
Jan  7 11:10:18.116: INFO: Lookups using e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-s8nbk jessie_tcp@dns-test-service.e2e-tests-dns-s8nbk jessie_udp@dns-test-service.e2e-tests-dns-s8nbk.svc jessie_tcp@dns-test-service.e2e-tests-dns-s8nbk.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-s8nbk.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-s8nbk.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  7 11:10:23.348: INFO: DNS probes using e2e-tests-dns-s8nbk/dns-test-3fa1ab72-313e-11ea-8b51-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:10:23.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-s8nbk" for this suite.
Jan  7 11:10:31.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:10:31.901: INFO: namespace: e2e-tests-dns-s8nbk, resource: bindings, ignored listing per whitelist
Jan  7 11:10:32.039: INFO: namespace e2e-tests-dns-s8nbk deletion completed in 8.284539164s

• [SLOW TEST:31.033 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:10:32.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan  7 11:10:40.450: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-5210416b-313e-11ea-8b51-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-dwtnc", SelfLink:"/api/v1/namespaces/e2e-tests-pods-dwtnc/pods/pod-submit-remove-5210416b-313e-11ea-8b51-0242ac110005", UID:"5212345d-313e-11ea-a994-fa163e34d433", ResourceVersion:"17466414", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713992232, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"275220477"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-f4v7n", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000bc4ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f4v7n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e9d198), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ae5620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e9d1d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc000e9d1f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000e9d1f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000e9d1fc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992232, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992239, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992239, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992232, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000d73f00), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000d73f20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://66c66b7605d6b050bf5cce88ea04ef68fb68791ec0dbc562d0668d36d62bafad"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:10:46.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dwtnc" for this suite.
Jan  7 11:10:53.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:10:53.276: INFO: namespace: e2e-tests-pods-dwtnc, resource: bindings, ignored listing per whitelist
Jan  7 11:10:53.326: INFO: namespace e2e-tests-pods-dwtnc deletion completed in 6.304893613s

• [SLOW TEST:21.287 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
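The pod that this submit-and-remove test creates can be reconstructed from the struct dump above (image, labels, restart policy); the generated name suffix and the `time` label value vary per run:

```yaml
# Reconstructed from the v1.Pod struct dump in the log above
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-5210416b-313e-11ea-8b51-0242ac110005
  labels:
    name: foo
    time: "275220477"    # per-run value
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
  restartPolicy: Always
```

The test then deletes this pod gracefully and asserts, via a watch, that both the kubelet's termination notice and the final deletion event are observed.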
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with a lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:10:53.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with a lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  7 11:11:13.912: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:13.918: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:15.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:15.941: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:17.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:17.997: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:19.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:19.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:21.920: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:22.003: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:23.920: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:23.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:25.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:25.933: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:27.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:27.940: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:29.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:29.935: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:31.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:31.944: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:33.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:33.949: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  7 11:11:35.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  7 11:11:35.964: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:11:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-s2mhv" for this suite.
Jan  7 11:12:00.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:12:00.137: INFO: namespace: e2e-tests-container-lifecycle-hook-s2mhv, resource: bindings, ignored listing per whitelist
Jan  7 11:12:00.227: INFO: namespace e2e-tests-container-lifecycle-hook-s2mhv deletion completed in 24.203544022s

• [SLOW TEST:66.900 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when creating a pod with a lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:12:00.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-869c0d4e-313e-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 11:12:00.592: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-b5lgl" to be "success or failure"
Jan  7 11:12:00.625: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.962882ms
Jan  7 11:12:02.652: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060317422s
Jan  7 11:12:04.680: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088364806s
Jan  7 11:12:06.871: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279084907s
Jan  7 11:12:08.897: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.305400795s
Jan  7 11:12:11.574: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.981827819s
STEP: Saw pod success
Jan  7 11:12:11.574: INFO: Pod "pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:12:11.589: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 11:12:12.200: INFO: Waiting for pod pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005 to disappear
Jan  7 11:12:12.228: INFO: Pod pod-projected-secrets-869dd103-313e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:12:12.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b5lgl" for this suite.
Jan  7 11:12:18.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:12:18.431: INFO: namespace: e2e-tests-projected-b5lgl, resource: bindings, ignored listing per whitelist
Jan  7 11:12:18.560: INFO: namespace e2e-tests-projected-b5lgl deletion completed in 6.26529214s

• [SLOW TEST:18.333 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
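The projected-secret test above mounts one secret into a pod through more than one volume and reads it back. A sketch of that shape, with illustrative names (the run generated `projected-secret-test-869c0d4e-…`) and an assumed image in place of the e2e mounttest image:

```yaml
# One secret consumed via two projected volumes; names and image are illustrative
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: secret-volume-test
    image: busybox             # assumed; the e2e test uses a mounttest image
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
  restartPolicy: Never
```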
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:12:18.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  7 11:12:18.820: INFO: Waiting up to 5m0s for pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005" in namespace "e2e-tests-containers-krzv6" to be "success or failure"
Jan  7 11:12:18.828: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280636ms
Jan  7 11:12:21.341: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520520453s
Jan  7 11:12:23.359: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539256473s
Jan  7 11:12:25.376: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556227353s
Jan  7 11:12:27.390: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56986444s
Jan  7 11:12:29.412: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591501766s
STEP: Saw pod success
Jan  7 11:12:29.412: INFO: Pod "client-containers-918e3b42-313e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:12:29.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-918e3b42-313e-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:12:29.959: INFO: Waiting for pod client-containers-918e3b42-313e-11ea-8b51-0242ac110005 to disappear
Jan  7 11:12:30.418: INFO: Pod client-containers-918e3b42-313e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:12:30.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-krzv6" for this suite.
Jan  7 11:12:36.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:12:36.962: INFO: namespace: e2e-tests-containers-krzv6, resource: bindings, ignored listing per whitelist
Jan  7 11:12:37.125: INFO: namespace e2e-tests-containers-krzv6 deletion completed in 6.405157723s

• [SLOW TEST:18.565 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
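The Docker Containers test above verifies that a container with no `command` or `args` falls back to the image's built-in ENTRYPOINT and CMD. A sketch of such a pod; the image here is an assumption (the e2e test uses an entrypoint-tester image):

```yaml
# No command: or args: set, so the image defaults (ENTRYPOINT/CMD) run
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # assumed image
  restartPolicy: Never
```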
SSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:12:37.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  7 11:12:47.384: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-9c988606-313e-11ea-8b51-0242ac110005,GenerateName:,Namespace:e2e-tests-events-mftqg,SelfLink:/api/v1/namespaces/e2e-tests-events-mftqg/pods/send-events-9c988606-313e-11ea-8b51-0242ac110005,UID:9c99c1fe-313e-11ea-a994-fa163e34d433,ResourceVersion:17466686,Generation:0,CreationTimestamp:2020-01-07 11:12:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 319655959,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ddt5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ddt5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ddt5z true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002121670} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002121690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:12:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:12:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:12:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 11:12:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-07 11:12:37 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-07 11:12:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://5a3ebd36b494ef2c2ad18ae2fdb7478227b78106d1f84d965c351e28a8f57b99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  7 11:12:49.419: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  7 11:12:51.456: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:12:51.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-mftqg" for this suite.
Jan  7 11:13:29.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:13:29.723: INFO: namespace: e2e-tests-events-mftqg, resource: bindings, ignored listing per whitelist
Jan  7 11:13:29.772: INFO: namespace e2e-tests-events-mftqg deletion completed in 38.179503135s

• [SLOW TEST:52.646 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:13:29.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  7 11:13:30.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lg6k8'
Jan  7 11:13:32.207: INFO: stderr: ""
Jan  7 11:13:32.207: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  7 11:13:33.222: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:33.222: INFO: Found 0 / 1
Jan  7 11:13:34.230: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:34.230: INFO: Found 0 / 1
Jan  7 11:13:35.227: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:35.228: INFO: Found 0 / 1
Jan  7 11:13:36.222: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:36.222: INFO: Found 0 / 1
Jan  7 11:13:37.758: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:37.759: INFO: Found 0 / 1
Jan  7 11:13:38.223: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:38.224: INFO: Found 0 / 1
Jan  7 11:13:39.491: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:39.492: INFO: Found 0 / 1
Jan  7 11:13:40.233: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:40.233: INFO: Found 0 / 1
Jan  7 11:13:41.218: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:41.218: INFO: Found 0 / 1
Jan  7 11:13:42.230: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:42.230: INFO: Found 1 / 1
Jan  7 11:13:42.230: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  7 11:13:42.239: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 11:13:42.239: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  7 11:13:42.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s7d9k redis-master --namespace=e2e-tests-kubectl-lg6k8'
Jan  7 11:13:42.539: INFO: stderr: ""
Jan  7 11:13:42.539: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jan 11:13:40.223 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jan 11:13:40.223 # Server started, Redis version 3.2.12\n1:M 07 Jan 11:13:40.223 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jan 11:13:40.223 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  7 11:13:42.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-s7d9k redis-master --namespace=e2e-tests-kubectl-lg6k8 --tail=1'
Jan  7 11:13:42.860: INFO: stderr: ""
Jan  7 11:13:42.860: INFO: stdout: "1:M 07 Jan 11:13:40.223 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  7 11:13:42.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-s7d9k redis-master --namespace=e2e-tests-kubectl-lg6k8 --limit-bytes=1'
Jan  7 11:13:43.033: INFO: stderr: ""
Jan  7 11:13:43.033: INFO: stdout: " "
STEP: exposing timestamps
Jan  7 11:13:43.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-s7d9k redis-master --namespace=e2e-tests-kubectl-lg6k8 --tail=1 --timestamps'
Jan  7 11:13:43.207: INFO: stderr: ""
Jan  7 11:13:43.207: INFO: stdout: "2020-01-07T11:13:40.229017001Z 1:M 07 Jan 11:13:40.223 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  7 11:13:45.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-s7d9k redis-master --namespace=e2e-tests-kubectl-lg6k8 --since=1s'
Jan  7 11:13:45.978: INFO: stderr: ""
Jan  7 11:13:45.978: INFO: stdout: ""
Jan  7 11:13:45.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-s7d9k redis-master --namespace=e2e-tests-kubectl-lg6k8 --since=24h'
Jan  7 11:13:46.136: INFO: stderr: ""
Jan  7 11:13:46.136: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jan 11:13:40.223 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jan 11:13:40.223 # Server started, Redis version 3.2.12\n1:M 07 Jan 11:13:40.223 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jan 11:13:40.223 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  7 11:13:46.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lg6k8'
Jan  7 11:13:46.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:13:46.287: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  7 11:13:46.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-lg6k8'
Jan  7 11:13:46.427: INFO: stderr: "No resources found.\n"
Jan  7 11:13:46.427: INFO: stdout: ""
Jan  7 11:13:46.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-lg6k8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  7 11:13:46.584: INFO: stderr: ""
Jan  7 11:13:46.585: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:13:46.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lg6k8" for this suite.
Jan  7 11:14:10.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:14:10.783: INFO: namespace: e2e-tests-kubectl-lg6k8, resource: bindings, ignored listing per whitelist
Jan  7 11:14:10.802: INFO: namespace e2e-tests-kubectl-lg6k8 deletion completed in 24.194898167s

• [SLOW TEST:41.029 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
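Editor's note: the `kubectl get pods -o go-template` invocation above filters out pods that are already being deleted by testing `metadata.deletionTimestamp`. A minimal Python sketch of that same filter logic, using hypothetical sample data (the pod names below are illustrative, not from this run):

```python
# Hypothetical pod list in the shape kubectl returns: the second pod has a
# deletionTimestamp set, meaning it is terminating.
pods = {
    "items": [
        {"metadata": {"name": "nginx-a"}},
        {"metadata": {"name": "nginx-b",
                      "deletionTimestamp": "2020-01-07T11:13:46Z"}},
    ]
}

def surviving_pods(pod_list):
    """Names of pods with no deletionTimestamp, mirroring the template's
    `{{ if not .metadata.deletionTimestamp }}` guard."""
    return [p["metadata"]["name"]
            for p in pod_list["items"]
            if not p["metadata"].get("deletionTimestamp")]

print(surviving_pods(pods))  # only the non-terminating pod remains
```

An empty result from this filter, as in the log above, is how the test confirms no non-terminating pods match the label selector.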
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:14:10.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:14:24.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-crhv2" for this suite.
Jan  7 11:14:38.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:14:38.625: INFO: namespace: e2e-tests-replication-controller-crhv2, resource: bindings, ignored listing per whitelist
Jan  7 11:14:38.670: INFO: namespace e2e-tests-replication-controller-crhv2 deletion completed in 14.328472344s

• [SLOW TEST:27.867 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:14:38.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0107 11:14:41.208589       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 11:14:41.208: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:14:41.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xgqfv" for this suite.
Jan  7 11:14:49.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:14:49.406: INFO: namespace: e2e-tests-gc-xgqfv, resource: bindings, ignored listing per whitelist
Jan  7 11:14:49.591: INFO: namespace e2e-tests-gc-xgqfv deletion completed in 8.373558274s

• [SLOW TEST:10.921 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
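Editor's note: the garbage-collector spec above relies on `ownerReferences` — deleting a Deployment without orphaning lets the GC delete any ReplicaSet (and, transitively, pods) that name it as an owner. A simplified sketch of that cascade, with illustrative object names and UIDs (not from this run):

```python
# Toy object graph: one Deployment owning one ReplicaSet, plus an
# unrelated ReplicaSet that must survive the cascade.
objects = [
    {"kind": "Deployment", "name": "deleteme", "uid": "d-1"},
    {"kind": "ReplicaSet", "name": "deleteme-abc123", "uid": "rs-1",
     "ownerReferences": [{"uid": "d-1"}]},
    {"kind": "ReplicaSet", "name": "standalone", "uid": "rs-2"},
]

def cascade_delete(objs, uid):
    """Delete the object with `uid`, then repeatedly delete any object
    whose ownerReferences point at an already-deleted object."""
    doomed = {uid}
    changed = True
    while changed:
        changed = False
        for o in objs:
            refs = {r["uid"] for r in o.get("ownerReferences", [])}
            if refs & doomed and o["uid"] not in doomed:
                doomed.add(o["uid"])
                changed = True
    return [o for o in objs if o["uid"] not in doomed]

print([o["name"] for o in cascade_delete(objects, "d-1")])
```

The "expected 0 rs, got 1 rs" STEP lines in the log are the test polling until the real GC finishes exactly this kind of cascade.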
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:14:49.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  7 11:14:49.715: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  7 11:14:49.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:14:50.243: INFO: stderr: ""
Jan  7 11:14:50.243: INFO: stdout: "service/redis-slave created\n"
Jan  7 11:14:50.245: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  7 11:14:50.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:14:50.900: INFO: stderr: ""
Jan  7 11:14:50.900: INFO: stdout: "service/redis-master created\n"
Jan  7 11:14:50.901: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  7 11:14:50.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:14:51.381: INFO: stderr: ""
Jan  7 11:14:51.381: INFO: stdout: "service/frontend created\n"
Jan  7 11:14:51.383: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  7 11:14:51.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:14:51.868: INFO: stderr: ""
Jan  7 11:14:51.868: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  7 11:14:51.870: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  7 11:14:51.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:14:52.928: INFO: stderr: ""
Jan  7 11:14:52.928: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  7 11:14:52.930: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  7 11:14:52.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:14:53.405: INFO: stderr: ""
Jan  7 11:14:53.405: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  7 11:14:53.405: INFO: Waiting for all frontend pods to be Running.
Jan  7 11:15:23.459: INFO: Waiting for frontend to serve content.
Jan  7 11:15:25.006: INFO: Trying to add a new entry to the guestbook.
Jan  7 11:15:25.042: INFO: Verifying that added entry can be retrieved.
Jan  7 11:15:25.079: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Jan  7 11:15:30.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:15:30.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:15:30.654: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 11:15:30.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:15:31.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:15:31.071: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 11:15:31.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:15:31.340: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:15:31.341: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 11:15:31.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:15:31.549: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:15:31.549: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 11:15:31.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:15:31.782: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:15:31.782: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  7 11:15:31.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs8fr'
Jan  7 11:15:32.242: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 11:15:32.243: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:15:32.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qs8fr" for this suite.
Jan  7 11:16:18.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:16:18.761: INFO: namespace: e2e-tests-kubectl-qs8fr, resource: bindings, ignored listing per whitelist
Jan  7 11:16:18.794: INFO: namespace e2e-tests-kubectl-qs8fr deletion completed in 46.514387843s

• [SLOW TEST:89.202 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
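Editor's note: the guestbook Deployment manifests echoed above use `apiVersion: extensions/v1beta1`, which was deprecated and removed from the API in Kubernetes 1.16. A sketch of the equivalent frontend Deployment under `apps/v1` — the main difference is that `spec.selector` becomes mandatory and must match the template labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:            # required in apps/v1; must match template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        ports:
        - containerPort: 80
```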
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:16:18.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  7 11:16:19.367: INFO: Number of nodes with available pods: 0
Jan  7 11:16:19.368: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:21.246: INFO: Number of nodes with available pods: 0
Jan  7 11:16:21.247: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:21.808: INFO: Number of nodes with available pods: 0
Jan  7 11:16:21.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:22.395: INFO: Number of nodes with available pods: 0
Jan  7 11:16:22.395: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:23.431: INFO: Number of nodes with available pods: 0
Jan  7 11:16:23.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:24.405: INFO: Number of nodes with available pods: 0
Jan  7 11:16:24.405: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:25.883: INFO: Number of nodes with available pods: 0
Jan  7 11:16:25.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:26.419: INFO: Number of nodes with available pods: 0
Jan  7 11:16:26.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:27.391: INFO: Number of nodes with available pods: 0
Jan  7 11:16:27.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:28.392: INFO: Number of nodes with available pods: 0
Jan  7 11:16:28.393: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:29.414: INFO: Number of nodes with available pods: 1
Jan  7 11:16:29.414: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  7 11:16:29.488: INFO: Number of nodes with available pods: 0
Jan  7 11:16:29.489: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:30.588: INFO: Number of nodes with available pods: 0
Jan  7 11:16:30.589: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:31.587: INFO: Number of nodes with available pods: 0
Jan  7 11:16:31.587: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:32.534: INFO: Number of nodes with available pods: 0
Jan  7 11:16:32.534: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:33.863: INFO: Number of nodes with available pods: 0
Jan  7 11:16:33.864: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:34.634: INFO: Number of nodes with available pods: 0
Jan  7 11:16:34.635: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:35.765: INFO: Number of nodes with available pods: 0
Jan  7 11:16:35.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:36.535: INFO: Number of nodes with available pods: 0
Jan  7 11:16:36.535: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:37.508: INFO: Number of nodes with available pods: 0
Jan  7 11:16:37.508: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:38.548: INFO: Number of nodes with available pods: 0
Jan  7 11:16:38.548: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:39.516: INFO: Number of nodes with available pods: 0
Jan  7 11:16:39.516: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:40.578: INFO: Number of nodes with available pods: 0
Jan  7 11:16:40.578: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:41.557: INFO: Number of nodes with available pods: 0
Jan  7 11:16:41.557: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:42.566: INFO: Number of nodes with available pods: 0
Jan  7 11:16:42.566: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:43.510: INFO: Number of nodes with available pods: 0
Jan  7 11:16:43.510: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:44.575: INFO: Number of nodes with available pods: 0
Jan  7 11:16:44.575: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:45.507: INFO: Number of nodes with available pods: 0
Jan  7 11:16:45.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:46.547: INFO: Number of nodes with available pods: 0
Jan  7 11:16:46.548: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:47.507: INFO: Number of nodes with available pods: 0
Jan  7 11:16:47.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:49.574: INFO: Number of nodes with available pods: 0
Jan  7 11:16:49.574: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:50.517: INFO: Number of nodes with available pods: 0
Jan  7 11:16:50.517: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:16:51.514: INFO: Number of nodes with available pods: 1
Jan  7 11:16:51.514: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c5f6j, will wait for the garbage collector to delete the pods
Jan  7 11:16:51.600: INFO: Deleting DaemonSet.extensions daemon-set took: 19.51104ms
Jan  7 11:16:51.800: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.6662ms
Jan  7 11:16:59.325: INFO: Number of nodes with available pods: 0
Jan  7 11:16:59.325: INFO: Number of running nodes: 0, number of available pods: 0
Jan  7 11:16:59.330: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c5f6j/daemonsets","resourceVersion":"17467319"},"items":null}

Jan  7 11:16:59.335: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c5f6j/pods","resourceVersion":"17467319"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:16:59.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-c5f6j" for this suite.
Jan  7 11:17:05.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:17:05.585: INFO: namespace: e2e-tests-daemonsets-c5f6j, resource: bindings, ignored listing per whitelist
Jan  7 11:17:05.730: INFO: namespace e2e-tests-daemonsets-c5f6j deletion completed in 6.35508248s

• [SLOW TEST:46.935 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:17:05.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 11:17:06.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-vd2l6" to be "success or failure"
Jan  7 11:17:06.163: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 94.714176ms
Jan  7 11:17:08.183: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114642847s
Jan  7 11:17:10.215: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146910673s
Jan  7 11:17:12.247: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179251011s
Jan  7 11:17:14.667: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.598963343s
Jan  7 11:17:16.682: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.614035431s
STEP: Saw pod success
Jan  7 11:17:16.682: INFO: Pod "downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:17:16.689: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 11:17:17.320: INFO: Waiting for pod downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005 to disappear
Jan  7 11:17:17.360: INFO: Pod downwardapi-volume-3cc5408d-313f-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:17:17.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vd2l6" for this suite.
Jan  7 11:17:23.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:17:23.605: INFO: namespace: e2e-tests-projected-vd2l6, resource: bindings, ignored listing per whitelist
Jan  7 11:17:23.728: INFO: namespace e2e-tests-projected-vd2l6 deletion completed in 6.34963022s

• [SLOW TEST:17.998 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
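Editor's note: the "Projected downwardAPI … should provide podname only" spec above creates a pod whose projected volume exposes the pod's own name as a file. A sketch of such a pod (the pod name and mount path here are illustrative; the test generates its own names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The test then reads the container's logs (the `cat` output) and asserts they contain the pod's name, which is why the log shows "Trying to get logs from node … container client-container".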
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:17:23.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-477af830-313f-11ea-8b51-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-477af816-313f-11ea-8b51-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  7 11:17:24.058: INFO: Waiting up to 5m0s for pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-54f64" to be "success or failure"
Jan  7 11:17:24.164: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 105.485765ms
Jan  7 11:17:26.182: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12410823s
Jan  7 11:17:28.202: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144078208s
Jan  7 11:17:30.285: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226478573s
Jan  7 11:17:32.297: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238681719s
Jan  7 11:17:34.312: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.253975226s
STEP: Saw pod success
Jan  7 11:17:34.312: INFO: Pod "projected-volume-477af66b-313f-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:17:34.323: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-477af66b-313f-11ea-8b51-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan  7 11:17:34.556: INFO: Waiting for pod projected-volume-477af66b-313f-11ea-8b51-0242ac110005 to disappear
Jan  7 11:17:34.572: INFO: Pod projected-volume-477af66b-313f-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:17:34.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-54f64" for this suite.
Jan  7 11:17:40.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:17:40.877: INFO: namespace: e2e-tests-projected-54f64, resource: bindings, ignored listing per whitelist
Jan  7 11:17:40.982: INFO: namespace e2e-tests-projected-54f64 deletion completed in 6.388309207s

• [SLOW TEST:17.254 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
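The "Waiting up to 5m0s for pod ... Elapsed: ..." lines above come from a poll-until-terminal-phase loop. A minimal sketch of that pattern in Python (the framework itself is Go; the function and names here are illustrative, not the framework's API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    # Poll until the pod reaches a wanted phase or the timeout expires,
    # mirroring the "Waiting up to 5m0s ... Elapsed: ..." log lines above.
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)

# Simulated pod: Pending for three polls, then Succeeded
# (sleep is stubbed out so the demo runs instantly).
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None)
```

Injecting `clock` and `sleep` keeps the loop testable without real delays, which is why the log can report sub-second elapsed times per poll.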
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:17:40.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wcvlq
Jan  7 11:17:51.274: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wcvlq
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 11:17:51.280: INFO: Initial restart count of pod liveness-http is 0
Jan  7 11:18:13.906: INFO: Restart count of pod e2e-tests-container-probe-wcvlq/liveness-http is now 1 (22.626520806s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:18:13.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-wcvlq" for this suite.
Jan  7 11:18:20.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:18:20.197: INFO: namespace: e2e-tests-container-probe-wcvlq, resource: bindings, ignored listing per whitelist
Jan  7 11:18:20.264: INFO: namespace e2e-tests-container-probe-wcvlq deletion completed in 6.188069556s

• [SLOW TEST:39.282 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
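The probe spec above saw `restartCount` go from 0 to 1 after the `/healthz` endpoint started failing. A simplified model of the kubelet's liveness accounting (a sketch under the default `failureThreshold=3`, not the kubelet's actual code):

```python
def restarts_after(probe_results, failure_threshold=3):
    # probe_results: sequence of liveness probe outcomes,
    # True meaning /healthz returned 2xx. After `failure_threshold`
    # consecutive failures the container is restarted and the
    # consecutive-failure counter resets; any success also resets it.
    restarts = 0
    consecutive_failures = 0
    for healthy in probe_results:
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts
```

This is why the restart in the log appears ~22s after the pod starts: the probe must fail several consecutive times before the container is killed and restarted.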
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:18:20.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  7 11:18:20.379: INFO: Waiting up to 5m0s for pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005" in namespace "e2e-tests-containers-dfrb7" to be "success or failure"
Jan  7 11:18:20.444: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.242483ms
Jan  7 11:18:22.457: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078021131s
Jan  7 11:18:24.477: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097946468s
Jan  7 11:18:26.509: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129282103s
Jan  7 11:18:29.070: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690738633s
Jan  7 11:18:31.084: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.704783401s
STEP: Saw pod success
Jan  7 11:18:31.084: INFO: Pod "client-containers-6911ff12-313f-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:18:31.089: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6911ff12-313f-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:18:31.332: INFO: Waiting for pod client-containers-6911ff12-313f-11ea-8b51-0242ac110005 to disappear
Jan  7 11:18:31.347: INFO: Pod client-containers-6911ff12-313f-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:18:31.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-dfrb7" for this suite.
Jan  7 11:18:39.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:18:39.564: INFO: namespace: e2e-tests-containers-dfrb7, resource: bindings, ignored listing per whitelist
Jan  7 11:18:39.588: INFO: namespace e2e-tests-containers-dfrb7 deletion completed in 8.23153708s

• [SLOW TEST:19.323 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
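The "override the image's default arguments (docker cmd)" spec exercises the documented interaction between the image's ENTRYPOINT/CMD and the pod spec's `command`/`args` fields. A small sketch of that resolution rule (function name is illustrative):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    # Per the Kubernetes container spec:
    #  - `command` overrides the image ENTRYPOINT, `args` overrides CMD;
    #  - if only `args` is set, the image ENTRYPOINT runs with those args;
    #  - if only `command` is set, the image CMD is ignored entirely.
    if command is not None:
        return list(command) + list(args or [])
    return list(entrypoint or []) + list(args if args is not None else cmd or [])
```

Setting `args` alone, as this test does, keeps the image's entrypoint but replaces its default arguments.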
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:18:39.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:18:49.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sqf45" for this suite.
Jan  7 11:19:33.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:19:34.116: INFO: namespace: e2e-tests-kubelet-test-sqf45, resource: bindings, ignored listing per whitelist
Jan  7 11:19:34.124: INFO: namespace e2e-tests-kubelet-test-sqf45 deletion completed in 44.202872588s

• [SLOW TEST:54.535 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
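The hostAliases spec above verifies that entries from the pod spec show up in the container's `/etc/hosts`. The rendering is straightforward; a sketch using the v1 PodSpec field names (`ip`, `hostnames`), with hypothetical sample addresses:

```python
def host_alias_entries(host_aliases):
    # Render pod spec hostAliases as /etc/hosts lines:
    # one line per entry, IP followed by its hostnames.
    return ["%s\t%s" % (a["ip"], " ".join(a["hostnames"])) for a in host_aliases]
```

The kubelet appends these lines to the managed section of the container's hosts file, which is what the test reads back.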
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:19:34.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  7 11:19:34.309: INFO: Waiting up to 5m0s for pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-rvpnf" to be "success or failure"
Jan  7 11:19:34.342: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.823829ms
Jan  7 11:19:36.360: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050581334s
Jan  7 11:19:38.382: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073378198s
Jan  7 11:19:40.619: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310259099s
Jan  7 11:19:42.662: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352990206s
Jan  7 11:19:44.681: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.371933885s
STEP: Saw pod success
Jan  7 11:19:44.681: INFO: Pod "downward-api-9522bdd3-313f-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:19:44.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-9522bdd3-313f-11ea-8b51-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  7 11:19:44.743: INFO: Waiting for pod downward-api-9522bdd3-313f-11ea-8b51-0242ac110005 to disappear
Jan  7 11:19:44.748: INFO: Pod downward-api-9522bdd3-313f-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:19:44.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rvpnf" for this suite.
Jan  7 11:19:50.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:19:51.044: INFO: namespace: e2e-tests-downward-api-rvpnf, resource: bindings, ignored listing per whitelist
Jan  7 11:19:51.173: INFO: namespace e2e-tests-downward-api-rvpnf deletion completed in 6.41842812s

• [SLOW TEST:17.049 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
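The Downward API spec above checks the documented fallback for `resourceFieldRef`: when a container declares no limit for a resource, the value exposed to the container defaults to the node's allocatable capacity. A sketch of that lookup (quantities simplified to plain integers for illustration):

```python
def resource_limit_env(container_limits, node_allocatable, resource):
    # Value a resourceFieldRef on limits.<resource> resolves to:
    # the container's declared limit if present, otherwise the
    # node's allocatable capacity for that resource.
    limits = container_limits or {}
    return limits.get(resource, node_allocatable[resource])
```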
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:19:51.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0107 11:20:08.520675       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 11:20:08.521: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:20:08.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-5ghhv" for this suite.
Jan  7 11:20:29.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:20:29.710: INFO: namespace: e2e-tests-gc-5ghhv, resource: bindings, ignored listing per whitelist
Jan  7 11:20:30.223: INFO: namespace e2e-tests-gc-5ghhv deletion completed in 21.66192086s

• [SLOW TEST:39.049 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
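The garbage collector spec above gives half the pods two owners: `simpletest-rc-to-be-deleted` (which is then deleted) and `simpletest-rc-to-stay`. The rule it verifies is that a dependent is only collected once every owner is gone or being deleted. A sketch of that decision (a simplified model, not the controller's actual graph code):

```python
def can_delete_dependent(owners, deleting_uids):
    # owners: maps owner UID -> True if that owner object still exists.
    # deleting_uids: UIDs of owners currently being deleted (e.g. an
    # owner waiting for dependents under foreground deletion).
    # The dependent survives as long as any live owner is not deleting.
    for uid, exists in owners.items():
        if exists and uid not in deleting_uids:
            return False  # a valid, non-deleting owner still references it
    return True
```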
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:20:30.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  7 11:20:30.797: INFO: PodSpec: initContainers in spec.initContainers
Jan  7 11:21:45.684: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b6cf799e-313f-11ea-8b51-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-tzdvn", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-tzdvn/pods/pod-init-b6cf799e-313f-11ea-8b51-0242ac110005", UID:"b6dd2ae5-313f-11ea-a994-fa163e34d433", ResourceVersion:"17467971", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713992830, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"797314066"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pgcmc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0016e8040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pgcmc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pgcmc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pgcmc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00173c138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0014020c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00173c1c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00173c1e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00173c1e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00173c1ec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992834, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992834, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992834, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713992830, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc000d72a60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00171df80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f16000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://124d8305e17491c14e8f19d1ed4fcd6a2734c6a614fa8735559a9bfdb08bb342"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d72ea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d72cc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:21:45.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-tzdvn" for this suite.
Jan  7 11:22:09.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:22:10.121: INFO: namespace: e2e-tests-init-container-tzdvn, resource: bindings, ignored listing per whitelist
Jan  7 11:22:10.133: INFO: namespace e2e-tests-init-container-tzdvn deletion completed in 24.424766757s

• [SLOW TEST:99.910 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
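The init-container spec above ends with `init1` at `RestartCount:3`, `init2` never started, and the app container `run1` still Waiting. That is the sequencing rule it tests: init containers run in order, a failing one is retried (under `restartPolicy: Always`), and app containers start only after every init container succeeds. A rough simulation (bounded attempt lists stand in for the kubelet's open-ended retry-with-backoff):

```python
def run_pod(init_results, app_containers):
    # init_results: ordered mapping of init container name -> list of
    # exit codes for successive attempts (0 = success).
    # Returns (app containers started, failed-attempt counts per init).
    started = []
    failures = {}
    for name, codes in init_results.items():
        for code in codes:
            if code == 0:
                break  # this init container succeeded; move to the next
            failures[name] = failures.get(name, 0) + 1
        else:
            # attempts exhausted without success: later init containers
            # and all app containers never start
            return started, failures
    started.extend(app_containers)
    return started, failures
```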
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:22:10.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-f213fc8e-313f-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:22:22.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nrcht" for this suite.
Jan  7 11:22:46.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:22:46.802: INFO: namespace: e2e-tests-configmap-nrcht, resource: bindings, ignored listing per whitelist
Jan  7 11:22:46.833: INFO: namespace e2e-tests-configmap-nrcht deletion completed in 24.353055553s

• [SLOW TEST:36.700 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
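The ConfigMap spec above mounts both text and binary keys as volume files. In the API object, `data` holds UTF-8 strings while `binaryData` values are base64-encoded; the kubelet writes both as files in the volume. A sketch of that materialization (illustrative key names):

```python
import base64

def configmap_volume_files(data, binary_data):
    # `data` keys become UTF-8 text files; `binaryData` values arrive
    # base64-encoded in the API object and are written as raw bytes.
    files = {k: v.encode("utf-8") for k, v in data.items()}
    files.update({k: base64.b64decode(v) for k, v in binary_data.items()})
    return files
```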
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:22:46.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-6p8c
STEP: Creating a pod to test atomic-volume-subpath
Jan  7 11:22:47.237: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6p8c" in namespace "e2e-tests-subpath-zdmj7" to be "success or failure"
Jan  7 11:22:47.272: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.351598ms
Jan  7 11:22:49.287: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049598194s
Jan  7 11:22:51.301: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063212754s
Jan  7 11:22:53.644: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406650765s
Jan  7 11:22:55.657: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.419426561s
Jan  7 11:22:57.675: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.437816743s
Jan  7 11:22:59.692: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.45486278s
Jan  7 11:23:01.714: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.476783241s
Jan  7 11:23:03.740: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.502442398s
Jan  7 11:23:05.756: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 18.518560614s
Jan  7 11:23:07.774: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 20.536657898s
Jan  7 11:23:09.796: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 22.558076121s
Jan  7 11:23:11.819: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 24.581339622s
Jan  7 11:23:13.842: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 26.604081234s
Jan  7 11:23:15.876: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 28.638505118s
Jan  7 11:23:17.911: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 30.673917559s
Jan  7 11:23:19.931: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Running", Reason="", readiness=false. Elapsed: 32.693497364s
Jan  7 11:23:22.267: INFO: Pod "pod-subpath-test-downwardapi-6p8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.029279877s
STEP: Saw pod success
Jan  7 11:23:22.267: INFO: Pod "pod-subpath-test-downwardapi-6p8c" satisfied condition "success or failure"
Jan  7 11:23:22.285: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-6p8c container test-container-subpath-downwardapi-6p8c: 
STEP: delete the pod
Jan  7 11:23:22.729: INFO: Waiting for pod pod-subpath-test-downwardapi-6p8c to disappear
Jan  7 11:23:22.743: INFO: Pod pod-subpath-test-downwardapi-6p8c no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-6p8c
Jan  7 11:23:22.743: INFO: Deleting pod "pod-subpath-test-downwardapi-6p8c" in namespace "e2e-tests-subpath-zdmj7"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:23:22.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zdmj7" for this suite.
Jan  7 11:23:28.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:23:29.011: INFO: namespace: e2e-tests-subpath-zdmj7, resource: bindings, ignored listing per whitelist
Jan  7 11:23:29.031: INFO: namespace e2e-tests-subpath-zdmj7 deletion completed in 6.27858612s

• [SLOW TEST:42.197 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
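The manifest behind the subpath test above is not included in the log; a minimal sketch of the scenario it exercises — a downwardAPI volume consumed through `subPath` — looks roughly like this (the pod name, image, and paths are illustrative assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi   # illustrative; the log's pod has a random suffix
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                     # assumed; the e2e suite uses its own test image
    command: ["sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume
      subPath: subdir                  # the subPath behavior under test
  volumes:
  - name: downward
    downwardAPI:                       # an "atomic writer" volume type
      items:
      - path: subdir/podname
        fieldRef:
          fieldPath: metadata.name
```

The "Atomic writer volumes" grouping refers to volume types (downwardAPI, configMap, secret, projected) whose contents the kubelet updates atomically via symlink swaps, which is what makes `subPath` handling non-trivial for them.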
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:23:29.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  7 11:23:29.208: INFO: Waiting up to 5m0s for pod "pod-21257ec4-3140-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-lbnqr" to be "success or failure"
Jan  7 11:23:29.226: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.967274ms
Jan  7 11:23:31.322: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113417925s
Jan  7 11:23:33.345: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136203948s
Jan  7 11:23:35.376: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167150824s
Jan  7 11:23:37.390: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181240957s
Jan  7 11:23:39.407: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.198634625s
STEP: Saw pod success
Jan  7 11:23:39.407: INFO: Pod "pod-21257ec4-3140-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:23:39.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-21257ec4-3140-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:23:39.545: INFO: Waiting for pod pod-21257ec4-3140-11ea-8b51-0242ac110005 to disappear
Jan  7 11:23:40.509: INFO: Pod pod-21257ec4-3140-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:23:40.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lbnqr" for this suite.
Jan  7 11:23:46.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:23:46.720: INFO: namespace: e2e-tests-emptydir-lbnqr, resource: bindings, ignored listing per whitelist
Jan  7 11:23:46.969: INFO: namespace e2e-tests-emptydir-lbnqr deletion completed in 6.442565775s

• [SLOW TEST:17.938 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
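The "(root,0666,tmpfs)" test above creates a pod whose manifest is not shown in the log. The conformance suite uses a dedicated mount-test image; a rough busybox equivalent checking the same three properties (running as root, file mode 0666, memory-backed medium) might look like this — all names and the command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-test              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed stand-in for the e2e mounttest image
    command:
    - sh
    - -c
    - |
      touch /test-volume/f
      chmod 0666 /test-volume/f        # the 0666 file mode under test
      stat -c '%a' /test-volume/f
      mount | grep /test-volume        # should show tmpfs
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # "tmpfs": back the emptyDir with RAM
```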
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:23:46.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005
Jan  7 11:23:47.152: INFO: Pod name my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005: Found 0 pods out of 1
Jan  7 11:23:52.916: INFO: Pod name my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005: Found 1 pods out of 1
Jan  7 11:23:52.917: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005" are running
Jan  7 11:23:56.961: INFO: Pod "my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005-9ksbw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:23:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:23:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:23:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:23:47 +0000 UTC Reason: Message:}])
Jan  7 11:23:56.962: INFO: Trying to dial the pod
Jan  7 11:24:02.033: INFO: Controller my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005: Got expected result from replica 1 [my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005-9ksbw]: "my-hostname-basic-2bd570f4-3140-11ea-8b51-0242ac110005-9ksbw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:24:02.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-t7xwr" for this suite.
Jan  7 11:24:11.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:24:11.633: INFO: namespace: e2e-tests-replication-controller-t7xwr, resource: bindings, ignored listing per whitelist
Jan  7 11:24:11.682: INFO: namespace e2e-tests-replication-controller-t7xwr deletion completed in 9.572473s

• [SLOW TEST:24.712 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
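The ReplicationController created above serves each replica's hostname over HTTP, which is what lets the test dial the pod and compare the response against the pod name. A sketch of such a controller (image tag and port are assumptions; the real name carries a random suffix):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic              # illustrative
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # assumed public image that responds to HTTP with its own hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```

In the log, "Got expected result from replica 1 […]-9ksbw" means the HTTP body matched the replica pod's generated name.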
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:24:11.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  7 11:24:23.234: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3aff3a10-3140-11ea-8b51-0242ac110005"
Jan  7 11:24:23.235: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3aff3a10-3140-11ea-8b51-0242ac110005" in namespace "e2e-tests-pods-5x7kz" to be "terminated due to deadline exceeded"
Jan  7 11:24:23.258: INFO: Pod "pod-update-activedeadlineseconds-3aff3a10-3140-11ea-8b51-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 23.746153ms
Jan  7 11:24:25.639: INFO: Pod "pod-update-activedeadlineseconds-3aff3a10-3140-11ea-8b51-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.404053818s
Jan  7 11:24:25.639: INFO: Pod "pod-update-activedeadlineseconds-3aff3a10-3140-11ea-8b51-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:24:25.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5x7kz" for this suite.
Jan  7 11:24:31.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:24:31.939: INFO: namespace: e2e-tests-pods-5x7kz, resource: bindings, ignored listing per whitelist
Jan  7 11:24:31.994: INFO: namespace e2e-tests-pods-5x7kz deletion completed in 6.332349431s

• [SLOW TEST:20.312 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
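The activeDeadlineSeconds test above works because `spec.activeDeadlineSeconds` is one of the few pod spec fields that is mutable on a running pod. The update the test applies is not shown in the log; conceptually it amounts to patching in a small value (the number here is illustrative):

```yaml
# Fragment of the updated pod spec; once the deadline elapses the kubelet
# fails the pod with Phase=Failed, Reason=DeadlineExceeded — exactly what
# the log records at 11:24:25.
spec:
  activeDeadlineSeconds: 5   # illustrative; the test's actual value is not logged
```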
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:24:31.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-46b1834f-3140-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:24:32.227: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-d4qwg" to be "success or failure"
Jan  7 11:24:32.327: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 98.946935ms
Jan  7 11:24:34.567: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338933042s
Jan  7 11:24:36.585: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357466891s
Jan  7 11:24:38.804: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576453959s
Jan  7 11:24:40.818: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.590165397s
Jan  7 11:24:43.985: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.757498143s
STEP: Saw pod success
Jan  7 11:24:43.986: INFO: Pod "pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:24:44.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 11:24:44.581: INFO: Waiting for pod pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005 to disappear
Jan  7 11:24:44.614: INFO: Pod pod-projected-configmaps-46b2c11b-3140-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:24:44.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d4qwg" for this suite.
Jan  7 11:24:50.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:24:50.840: INFO: namespace: e2e-tests-projected-d4qwg, resource: bindings, ignored listing per whitelist
Jan  7 11:24:50.931: INFO: namespace e2e-tests-projected-d4qwg deletion completed in 6.298631125s

• [SLOW TEST:18.937 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
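"Consumable in multiple volumes in the same pod" means one projected configMap mounted through two separate volumes. A minimal sketch of such a pod (names, image, and mount paths are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # assumed
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # the configMap created in the log
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # same configMap, second volume
```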
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:24:50.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan  7 11:24:51.227: INFO: Waiting up to 5m0s for pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005" in namespace "e2e-tests-var-expansion-zhrnx" to be "success or failure"
Jan  7 11:24:51.305: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 77.430026ms
Jan  7 11:24:53.456: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228801316s
Jan  7 11:24:55.469: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242218646s
Jan  7 11:24:57.854: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62662203s
Jan  7 11:25:00.045: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817959287s
Jan  7 11:25:02.065: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.837841975s
STEP: Saw pod success
Jan  7 11:25:02.065: INFO: Pod "var-expansion-5203d74a-3140-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:25:02.073: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-5203d74a-3140-11ea-8b51-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  7 11:25:02.238: INFO: Waiting for pod var-expansion-5203d74a-3140-11ea-8b51-0242ac110005 to disappear
Jan  7 11:25:02.265: INFO: Pod var-expansion-5203d74a-3140-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:25:02.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zhrnx" for this suite.
Jan  7 11:25:08.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:25:08.562: INFO: namespace: e2e-tests-var-expansion-zhrnx, resource: bindings, ignored listing per whitelist
Jan  7 11:25:08.594: INFO: namespace e2e-tests-var-expansion-zhrnx deletion completed in 6.31644211s

• [SLOW TEST:17.662 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
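The variable-expansion test above relies on Kubernetes' own `$(VAR)` substitution in container `args` — performed by the kubelet when the container is created, not by a shell. A sketch of the kind of pod it submits (env name and value are illustrative assumptions; the log only names the container `dapi-container`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-test             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                     # assumed
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]          # $(MESSAGE) expanded by Kubernetes before exec
    env:
    - name: MESSAGE
      value: "test-value"              # illustrative
```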
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:25:08.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-x4mwx
Jan  7 11:25:19.070: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-x4mwx
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 11:25:19.087: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:29:20.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-x4mwx" for this suite.
Jan  7 11:29:26.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:29:27.212: INFO: namespace: e2e-tests-container-probe-x4mwx, resource: bindings, ignored listing per whitelist
Jan  7 11:29:27.219: INFO: namespace e2e-tests-container-probe-x4mwx deletion completed in 6.355020078s

• [SLOW TEST:258.624 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
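The probe test above verifies a *negative*: a pod with a healthy `/healthz` HTTP liveness probe must keep `restartCount` at 0, which is why the log sits idle from 11:25 to 11:29 before tearing down. A sketch of the `liveness-http` pod (image and probe timings are assumptions; any server that keeps answering 200 on /healthz would do):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    # assumed image; the suite uses a test server that serves /healthz
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15          # illustrative timings
      periodSeconds: 10
      failureThreshold: 1
```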
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:29:27.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f69de3fb-3140-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:29:27.423: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-s9hk6" to be "success or failure"
Jan  7 11:29:27.448: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.939437ms
Jan  7 11:29:29.647: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223611215s
Jan  7 11:29:31.666: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242957156s
Jan  7 11:29:33.691: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267833941s
Jan  7 11:29:35.708: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284989461s
Jan  7 11:29:37.746: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.323175235s
STEP: Saw pod success
Jan  7 11:29:37.747: INFO: Pod "pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:29:37.754: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 11:29:38.222: INFO: Waiting for pod pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005 to disappear
Jan  7 11:29:38.415: INFO: Pod pod-projected-configmaps-f6a82f86-3140-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:29:38.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s9hk6" for this suite.
Jan  7 11:29:44.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:29:44.517: INFO: namespace: e2e-tests-projected-s9hk6, resource: bindings, ignored listing per whitelist
Jan  7 11:29:44.675: INFO: namespace e2e-tests-projected-s9hk6 deletion completed in 6.247515306s

• [SLOW TEST:17.455 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
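"Mappings and Item mode set" refers to remapping configMap keys to custom paths with a per-item file `mode`. A fragment sketching the volume the test likely builds (key names, path, and mode value are illustrative assumptions):

```yaml
# Volume fragment: per-item path mapping plus per-item mode,
# the two features named in the test title.
volumes:
- name: projected-configmap-volume
  projected:
    sources:
    - configMap:
        name: projected-configmap-test-volume-map   # created in the log above
        items:
        - key: data-1                  # illustrative key
          path: path/to/data-2         # "mapping": key renamed on disk
          mode: 0400                   # "Item mode": per-file permissions
```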
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:29:44.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0107a4a7-3141-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:29:44.831: INFO: Waiting up to 5m0s for pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-bvcbx" to be "success or failure"
Jan  7 11:29:44.892: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 60.198313ms
Jan  7 11:29:46.925: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0932475s
Jan  7 11:29:48.971: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139076995s
Jan  7 11:29:51.318: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.486036469s
Jan  7 11:29:54.051: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.219509962s
Jan  7 11:29:56.071: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.238975454s
STEP: Saw pod success
Jan  7 11:29:56.071: INFO: Pod "pod-configmaps-01085185-3141-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:29:56.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-01085185-3141-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 11:29:56.947: INFO: Waiting for pod pod-configmaps-01085185-3141-11ea-8b51-0242ac110005 to disappear
Jan  7 11:29:57.007: INFO: Pod pod-configmaps-01085185-3141-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:29:57.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bvcbx" for this suite.
Jan  7 11:30:03.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:30:03.195: INFO: namespace: e2e-tests-configmap-bvcbx, resource: bindings, ignored listing per whitelist
Jan  7 11:30:03.266: INFO: namespace e2e-tests-configmap-bvcbx deletion completed in 6.243228942s

• [SLOW TEST:18.590 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
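The non-root test above asserts that a configMap volume is still readable when the container runs as an unprivileged user. A sketch of such a pod (UID, image, and paths are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot         # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # any non-root UID; the value is an assumption
  containers:
  - name: configmap-volume-test
    image: busybox                     # assumed
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume      # the configMap created in the log above
```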
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:30:03.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jrz8t
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  7 11:30:03.495: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  7 11:30:43.956: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-jrz8t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:30:43.956: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:30:44.046777       8 log.go:172] (0xc00218a2c0) (0xc002636000) Create stream
I0107 11:30:44.047006       8 log.go:172] (0xc00218a2c0) (0xc002636000) Stream added, broadcasting: 1
I0107 11:30:44.051241       8 log.go:172] (0xc00218a2c0) Reply frame received for 1
I0107 11:30:44.051295       8 log.go:172] (0xc00218a2c0) (0xc001f3cfa0) Create stream
I0107 11:30:44.051314       8 log.go:172] (0xc00218a2c0) (0xc001f3cfa0) Stream added, broadcasting: 3
I0107 11:30:44.053297       8 log.go:172] (0xc00218a2c0) Reply frame received for 3
I0107 11:30:44.053324       8 log.go:172] (0xc00218a2c0) (0xc002636140) Create stream
I0107 11:30:44.053339       8 log.go:172] (0xc00218a2c0) (0xc002636140) Stream added, broadcasting: 5
I0107 11:30:44.054731       8 log.go:172] (0xc00218a2c0) Reply frame received for 5
I0107 11:30:44.346290       8 log.go:172] (0xc00218a2c0) Data frame received for 3
I0107 11:30:44.346360       8 log.go:172] (0xc001f3cfa0) (3) Data frame handling
I0107 11:30:44.346384       8 log.go:172] (0xc001f3cfa0) (3) Data frame sent
I0107 11:30:44.540909       8 log.go:172] (0xc00218a2c0) Data frame received for 1
I0107 11:30:44.541117       8 log.go:172] (0xc00218a2c0) (0xc001f3cfa0) Stream removed, broadcasting: 3
I0107 11:30:44.541264       8 log.go:172] (0xc002636000) (1) Data frame handling
I0107 11:30:44.541298       8 log.go:172] (0xc002636000) (1) Data frame sent
I0107 11:30:44.541335       8 log.go:172] (0xc00218a2c0) (0xc002636000) Stream removed, broadcasting: 1
I0107 11:30:44.542419       8 log.go:172] (0xc00218a2c0) (0xc002636140) Stream removed, broadcasting: 5
I0107 11:30:44.542619       8 log.go:172] (0xc00218a2c0) Go away received
I0107 11:30:44.542713       8 log.go:172] (0xc00218a2c0) (0xc002636000) Stream removed, broadcasting: 1
I0107 11:30:44.542737       8 log.go:172] (0xc00218a2c0) (0xc001f3cfa0) Stream removed, broadcasting: 3
I0107 11:30:44.542749       8 log.go:172] (0xc00218a2c0) (0xc002636140) Stream removed, broadcasting: 5
Jan  7 11:30:44.543: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:30:44.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jrz8t" for this suite.
Jan  7 11:31:12.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:31:12.649: INFO: namespace: e2e-tests-pod-network-test-jrz8t, resource: bindings, ignored listing per whitelist
Jan  7 11:31:12.738: INFO: namespace e2e-tests-pod-network-test-jrz8t deletion completed in 28.17676749s

• [SLOW TEST:69.471 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:31:12.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rw9xn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  7 11:31:12.882: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  7 11:31:47.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-rw9xn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 11:31:47.203: INFO: >>> kubeConfig: /root/.kube/config
I0107 11:31:47.291597       8 log.go:172] (0xc0001e3340) (0xc0022d2fa0) Create stream
I0107 11:31:47.291712       8 log.go:172] (0xc0001e3340) (0xc0022d2fa0) Stream added, broadcasting: 1
I0107 11:31:47.302837       8 log.go:172] (0xc0001e3340) Reply frame received for 1
I0107 11:31:47.302932       8 log.go:172] (0xc0001e3340) (0xc0015066e0) Create stream
I0107 11:31:47.302958       8 log.go:172] (0xc0001e3340) (0xc0015066e0) Stream added, broadcasting: 3
I0107 11:31:47.306215       8 log.go:172] (0xc0001e3340) Reply frame received for 3
I0107 11:31:47.306423       8 log.go:172] (0xc0001e3340) (0xc001506820) Create stream
I0107 11:31:47.306452       8 log.go:172] (0xc0001e3340) (0xc001506820) Stream added, broadcasting: 5
I0107 11:31:47.308609       8 log.go:172] (0xc0001e3340) Reply frame received for 5
I0107 11:31:47.594727       8 log.go:172] (0xc0001e3340) Data frame received for 3
I0107 11:31:47.594989       8 log.go:172] (0xc0015066e0) (3) Data frame handling
I0107 11:31:47.595043       8 log.go:172] (0xc0015066e0) (3) Data frame sent
I0107 11:31:47.762252       8 log.go:172] (0xc0001e3340) Data frame received for 1
I0107 11:31:47.762424       8 log.go:172] (0xc0001e3340) (0xc0015066e0) Stream removed, broadcasting: 3
I0107 11:31:47.762541       8 log.go:172] (0xc0022d2fa0) (1) Data frame handling
I0107 11:31:47.762624       8 log.go:172] (0xc0022d2fa0) (1) Data frame sent
I0107 11:31:47.762804       8 log.go:172] (0xc0001e3340) (0xc001506820) Stream removed, broadcasting: 5
I0107 11:31:47.763044       8 log.go:172] (0xc0001e3340) (0xc0022d2fa0) Stream removed, broadcasting: 1
I0107 11:31:47.763089       8 log.go:172] (0xc0001e3340) Go away received
I0107 11:31:47.763592       8 log.go:172] (0xc0001e3340) (0xc0022d2fa0) Stream removed, broadcasting: 1
I0107 11:31:47.763652       8 log.go:172] (0xc0001e3340) (0xc0015066e0) Stream removed, broadcasting: 3
I0107 11:31:47.763675       8 log.go:172] (0xc0001e3340) (0xc001506820) Stream removed, broadcasting: 5
Jan  7 11:31:47.764: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:31:47.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rw9xn" for this suite.
Jan  7 11:32:11.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:32:12.013: INFO: namespace: e2e-tests-pod-network-test-rw9xn, resource: bindings, ignored listing per whitelist
Jan  7 11:32:12.072: INFO: namespace e2e-tests-pod-network-test-rw9xn deletion completed in 24.265801774s

• [SLOW TEST:59.333 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:32:12.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  7 11:32:12.283: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:32:29.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bk72d" for this suite.
Jan  7 11:32:37.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:32:37.440: INFO: namespace: e2e-tests-init-container-bk72d, resource: bindings, ignored listing per whitelist
Jan  7 11:32:37.516: INFO: namespace e2e-tests-init-container-bk72d deletion completed in 8.260911994s

• [SLOW TEST:25.444 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:32:37.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  7 11:32:37.766: INFO: Waiting up to 5m0s for pod "pod-681b470e-3141-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-gtprr" to be "success or failure"
Jan  7 11:32:37.792: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.408575ms
Jan  7 11:32:39.967: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201279622s
Jan  7 11:32:42.049: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283454957s
Jan  7 11:32:44.285: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518978129s
Jan  7 11:32:46.311: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54519428s
Jan  7 11:32:48.349: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.583399551s
STEP: Saw pod success
Jan  7 11:32:48.350: INFO: Pod "pod-681b470e-3141-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:32:48.378: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-681b470e-3141-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:32:48.547: INFO: Waiting for pod pod-681b470e-3141-11ea-8b51-0242ac110005 to disappear
Jan  7 11:32:48.618: INFO: Pod pod-681b470e-3141-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:32:48.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gtprr" for this suite.
Jan  7 11:32:54.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:32:54.830: INFO: namespace: e2e-tests-emptydir-gtprr, resource: bindings, ignored listing per whitelist
Jan  7 11:32:54.861: INFO: namespace e2e-tests-emptydir-gtprr deletion completed in 6.221595104s

• [SLOW TEST:17.345 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:32:54.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 11:32:55.144: INFO: Waiting up to 5m0s for pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-4h2sc" to be "success or failure"
Jan  7 11:32:55.161: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.004479ms
Jan  7 11:32:57.183: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038410771s
Jan  7 11:32:59.218: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073341744s
Jan  7 11:33:01.228: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084035273s
Jan  7 11:33:03.246: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102189231s
Jan  7 11:33:05.258: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113442753s
STEP: Saw pod success
Jan  7 11:33:05.258: INFO: Pod "downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:33:05.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 11:33:05.796: INFO: Waiting for pod downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005 to disappear
Jan  7 11:33:05.804: INFO: Pod downwardapi-volume-727641fa-3141-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:33:05.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4h2sc" for this suite.
Jan  7 11:33:11.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:33:12.168: INFO: namespace: e2e-tests-projected-4h2sc, resource: bindings, ignored listing per whitelist
Jan  7 11:33:12.190: INFO: namespace e2e-tests-projected-4h2sc deletion completed in 6.373998173s

• [SLOW TEST:17.328 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:33:12.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:33:12.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-67v67" for this suite.
Jan  7 11:33:18.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:33:18.873: INFO: namespace: e2e-tests-kubelet-test-67v67, resource: bindings, ignored listing per whitelist
Jan  7 11:33:18.899: INFO: namespace e2e-tests-kubelet-test-67v67 deletion completed in 6.455210831s

• [SLOW TEST:6.708 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:33:18.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-80cd0af2-3141-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-80cd0af2-3141-11ea-8b51-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:34:49.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cxdgr" for this suite.
Jan  7 11:35:13.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:35:13.734: INFO: namespace: e2e-tests-configmap-cxdgr, resource: bindings, ignored listing per whitelist
Jan  7 11:35:13.811: INFO: namespace e2e-tests-configmap-cxdgr deletion completed in 24.309454743s

• [SLOW TEST:114.912 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:35:13.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c562aa09-3141-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:35:14.395: INFO: Waiting up to 5m0s for pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-d2dfs" to be "success or failure"
Jan  7 11:35:14.416: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.373392ms
Jan  7 11:35:16.432: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03648521s
Jan  7 11:35:18.450: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054837857s
Jan  7 11:35:21.492: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.097325763s
Jan  7 11:35:23.516: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.120838707s
Jan  7 11:35:25.528: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.132751043s
STEP: Saw pod success
Jan  7 11:35:25.528: INFO: Pod "pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:35:25.532: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 11:35:26.852: INFO: Waiting for pod pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005 to disappear
Jan  7 11:35:26.881: INFO: Pod pod-configmaps-c566f1f5-3141-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:35:26.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d2dfs" for this suite.
Jan  7 11:35:33.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:35:33.133: INFO: namespace: e2e-tests-configmap-d2dfs, resource: bindings, ignored listing per whitelist
Jan  7 11:35:33.197: INFO: namespace e2e-tests-configmap-d2dfs deletion completed in 6.3060729s

• [SLOW TEST:19.386 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:35:33.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-d0c1d8a8-3141-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:35:33.340: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-s9557" to be "success or failure"
Jan  7 11:35:33.433: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 93.320255ms
Jan  7 11:35:35.446: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106411062s
Jan  7 11:35:37.460: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120467318s
Jan  7 11:35:39.953: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61296583s
Jan  7 11:35:41.968: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.628608383s
Jan  7 11:35:43.987: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.647605553s
STEP: Saw pod success
Jan  7 11:35:43.988: INFO: Pod "pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:35:43.995: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 11:35:44.763: INFO: Waiting for pod pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005 to disappear
Jan  7 11:35:44.776: INFO: Pod pod-projected-configmaps-d0c2ed45-3141-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:35:44.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s9557" for this suite.
Jan  7 11:35:50.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:35:50.963: INFO: namespace: e2e-tests-projected-s9557, resource: bindings, ignored listing per whitelist
Jan  7 11:35:50.989: INFO: namespace e2e-tests-projected-s9557 deletion completed in 6.205021474s

• [SLOW TEST:17.791 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:35:50.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-2xzqk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2xzqk to expose endpoints map[]
Jan  7 11:35:51.234: INFO: Get endpoints failed (6.495353ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  7 11:35:52.243: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2xzqk exposes endpoints map[] (1.015740007s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-2xzqk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2xzqk to expose endpoints map[pod1:[100]]
Jan  7 11:35:57.567: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.242037077s elapsed, will retry)
Jan  7 11:36:02.621: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2xzqk exposes endpoints map[pod1:[100]] (10.295482327s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-2xzqk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2xzqk to expose endpoints map[pod1:[100] pod2:[101]]
Jan  7 11:36:07.307: INFO: Unexpected endpoints: found map[dc1351ee-3141-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.669926897s elapsed, will retry)
Jan  7 11:36:12.239: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2xzqk exposes endpoints map[pod2:[101] pod1:[100]] (9.601697373s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-2xzqk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2xzqk to expose endpoints map[pod2:[101]]
Jan  7 11:36:13.796: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2xzqk exposes endpoints map[pod2:[101]] (1.528166912s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-2xzqk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2xzqk to expose endpoints map[]
Jan  7 11:36:14.844: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2xzqk exposes endpoints map[] (1.026296226s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:36:15.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2xzqk" for this suite.
Jan  7 11:36:39.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:36:39.213: INFO: namespace: e2e-tests-services-2xzqk, resource: bindings, ignored listing per whitelist
Jan  7 11:36:39.232: INFO: namespace e2e-tests-services-2xzqk deletion completed in 24.201304751s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.243 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
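Editor's note: the endpoint maps logged above (map[pod1:[100]], then map[pod1:[100] pod2:[101]]) track container ports 100 and 101 behind one multi-port Service. A minimal sketch of such a Service, with the selector, labels, and port names chosen for illustration (the test's actual manifest may differ):

```yaml
# Illustrative multi-port Service; selector and port names are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: endpoint-pods          # pod1 and pod2 would carry this label
  ports:
  - name: portname1
    port: 80
    targetPort: 100             # the [100] in the logged endpoint map
  - name: portname2
    port: 81
    targetPort: 101             # the [101] in the logged endpoint map
```

As each labeled pod becomes ready, the endpoints controller adds its address under the matching container port, which is why the logged map grows pod by pod and empties again as the pods are deleted.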
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:36:39.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  7 11:36:52.389: INFO: Successfully updated pod "annotationupdatef84a0966-3141-11ea-8b51-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:36:54.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5s4s9" for this suite.
Jan  7 11:37:18.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:37:18.829: INFO: namespace: e2e-tests-downward-api-5s4s9, resource: bindings, ignored listing per whitelist
Jan  7 11:37:18.865: INFO: namespace e2e-tests-downward-api-5s4s9 deletion completed in 24.169713903s

• [SLOW TEST:39.633 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
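Editor's note: the "Successfully updated pod" line above corresponds to mutating the pod's annotations and waiting for the downward API volume to reflect the change. A minimal sketch of that pod shape, with the name, image, and annotation values being illustrative assumptions:

```yaml
# Downward API volume projecting pod annotations; the kubelet eventually
# rewrites the mounted file after the annotations are updated.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # illustrative; the test appends a UID suffix
  annotations:
    builder: alice                 # assumed sample annotation
spec:
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```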
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:37:18.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0fd8a112-3142-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 11:37:19.180: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-p6f5v" to be "success or failure"
Jan  7 11:37:19.189: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.039619ms
Jan  7 11:37:21.442: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261659388s
Jan  7 11:37:23.478: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297924339s
Jan  7 11:37:25.577: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396993286s
Jan  7 11:37:27.603: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.422378464s
Jan  7 11:37:29.625: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.444774132s
STEP: Saw pod success
Jan  7 11:37:29.625: INFO: Pod "pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:37:29.636: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 11:37:30.262: INFO: Waiting for pod pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:37:30.298: INFO: Pod pod-projected-secrets-0fd96d9a-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:37:30.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p6f5v" for this suite.
Jan  7 11:37:36.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:37:36.455: INFO: namespace: e2e-tests-projected-p6f5v, resource: bindings, ignored listing per whitelist
Jan  7 11:37:36.668: INFO: namespace e2e-tests-projected-p6f5v deletion completed in 6.357043328s

• [SLOW TEST:17.802 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
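Editor's note: the pod above mounts a Secret through a projected volume and exits after printing the mounted content, which is why the framework waits for "success or failure" (terminal phase Succeeded or Failed) rather than readiness. A sketch of that shape, with illustrative names and image:

```yaml
# Projected volume sourcing a Secret; names are assumptions modeled on the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                         # assumed image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test      # the test's Secret carries a UID suffix
```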
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:37:36.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 11:37:36.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-svjh5" to be "success or failure"
Jan  7 11:37:36.888: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.534154ms
Jan  7 11:37:39.076: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196753891s
Jan  7 11:37:41.151: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272768699s
Jan  7 11:37:43.779: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90026364s
Jan  7 11:37:45.807: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.927865659s
Jan  7 11:37:47.822: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.943045067s
STEP: Saw pod success
Jan  7 11:37:47.823: INFO: Pod "downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:37:47.832: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 11:37:47.984: INFO: Waiting for pod downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:37:47.997: INFO: Pod downwardapi-volume-1a6500fd-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:37:47.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-svjh5" for this suite.
Jan  7 11:37:54.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:37:54.269: INFO: namespace: e2e-tests-projected-svjh5, resource: bindings, ignored listing per whitelist
Jan  7 11:37:54.350: INFO: namespace e2e-tests-projected-svjh5 deletion completed in 6.337273975s

• [SLOW TEST:17.682 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
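Editor's note: "should provide container's memory limit" exercises a downward API item that uses resourceFieldRef rather than fieldRef. A hedged sketch of the relevant volume, with container name, image, and limit values being assumptions:

```yaml
# Projected downward API volume exposing the container's memory limit as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumed image
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi                     # assumed limit; the file reflects this value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```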
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:37:54.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  7 11:37:54.652: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:37:54.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-44q4x" for this suite.
Jan  7 11:38:00.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:38:01.017: INFO: namespace: e2e-tests-kubectl-44q4x, resource: bindings, ignored listing per whitelist
Jan  7 11:38:01.062: INFO: namespace e2e-tests-kubectl-44q4x deletion completed in 6.24645329s

• [SLOW TEST:6.712 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
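Editor's note: with `-p 0`, kubectl proxy binds an ephemeral port and announces it on stdout in a line like "Starting to serve on 127.0.0.1:41523"; the test scrapes that port and then curls /api/ through the proxy. A minimal sketch of that port-scraping step (the exact message format is an assumption based on the documented startup banner):

```python
import re


def parse_proxy_port(line: str) -> int:
    """Extract the ephemeral port from kubectl proxy's startup line.

    Assumes a banner of the form 'Starting to serve on HOST:PORT',
    as printed when the proxy is started with -p 0.
    """
    m = re.search(r"Starting to serve on .*:(\d+)", line)
    if m is None:
        raise ValueError(f"no port found in: {line!r}")
    return int(m.group(1))
```

Once the port is known, the verification step amounts to `curl http://127.0.0.1:PORT/api/` against the proxy.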
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:38:01.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:38:08.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-ggmjn" for this suite.
Jan  7 11:38:14.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:38:14.223: INFO: namespace: e2e-tests-namespaces-ggmjn, resource: bindings, ignored listing per whitelist
Jan  7 11:38:14.325: INFO: namespace e2e-tests-namespaces-ggmjn deletion completed in 6.280078967s
STEP: Destroying namespace "e2e-tests-nsdeletetest-tsrd6" for this suite.
Jan  7 11:38:14.330: INFO: Namespace e2e-tests-nsdeletetest-tsrd6 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-fftdt" for this suite.
Jan  7 11:38:20.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:38:20.605: INFO: namespace: e2e-tests-nsdeletetest-fftdt, resource: bindings, ignored listing per whitelist
Jan  7 11:38:20.621: INFO: namespace e2e-tests-nsdeletetest-fftdt deletion completed in 6.291410662s

• [SLOW TEST:19.558 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
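Editor's note: namespace deletion cascades to every namespaced object, which is what the "Verifying there is no service in the namespace" step checks after the namespace is recreated. A sketch of the kind of Service involved, with the names being illustrative (the test's namespaces carry generated suffixes, as the destroyed-namespace lines above show):

```yaml
# Namespaced Service; deleting the namespace garbage-collects it along with
# everything else inside. Names are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest
spec:
  ports:
  - port: 80
    targetPort: 80
```

After `kubectl delete namespace nsdeletetest` finishes and the namespace is recreated empty, listing services in it returns nothing.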
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:38:20.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-34a4111d-3142-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:38:20.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-54pg5" to be "success or failure"
Jan  7 11:38:21.013: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.452904ms
Jan  7 11:38:23.084: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089629525s
Jan  7 11:38:25.101: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107038984s
Jan  7 11:38:27.513: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.519416691s
Jan  7 11:38:29.526: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53150834s
Jan  7 11:38:31.538: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.544302238s
STEP: Saw pod success
Jan  7 11:38:31.539: INFO: Pod "pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:38:31.544: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 11:38:32.680: INFO: Waiting for pod pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:38:33.287: INFO: Pod pod-configmaps-34aeb765-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:38:33.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-54pg5" for this suite.
Jan  7 11:38:39.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:38:39.896: INFO: namespace: e2e-tests-configmap-54pg5, resource: bindings, ignored listing per whitelist
Jan  7 11:38:39.904: INFO: namespace e2e-tests-configmap-54pg5 deletion completed in 6.596954857s

• [SLOW TEST:19.282 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
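Editor's note: the defaultMode variant of the ConfigMap volume test checks both the file content and its permission bits. A hedged sketch of that pod, with names, image, and the 0400 mode chosen for illustration:

```yaml
# ConfigMap volume with an explicit defaultMode on the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                     # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume      # the test's ConfigMap carries a UID suffix
      defaultMode: 0400                # octal; assumed mode, read-only for owner
```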
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:38:39.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-4r4w5/secret-test-401dd1d3-3142-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 11:38:40.276: INFO: Waiting up to 5m0s for pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-4r4w5" to be "success or failure"
Jan  7 11:38:40.291: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.275604ms
Jan  7 11:38:42.325: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048488195s
Jan  7 11:38:44.337: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060771475s
Jan  7 11:38:47.046: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.769827202s
Jan  7 11:38:49.087: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810733943s
Jan  7 11:38:51.099: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.822358932s
STEP: Saw pod success
Jan  7 11:38:51.099: INFO: Pod "pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:38:51.103: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005 container env-test: 
STEP: delete the pod
Jan  7 11:38:51.160: INFO: Waiting for pod pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:38:51.332: INFO: Pod pod-configmaps-402cda91-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:38:51.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4r4w5" for this suite.
Jan  7 11:38:57.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:38:57.501: INFO: namespace: e2e-tests-secrets-4r4w5, resource: bindings, ignored listing per whitelist
Jan  7 11:38:57.531: INFO: namespace e2e-tests-secrets-4r4w5 deletion completed in 6.181259119s

• [SLOW TEST:17.626 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
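Editor's note: unlike the volume-based Secret tests above, this one injects the Secret through a container environment variable and the pod just prints its environment. A sketch of that shape, with the variable and key names being assumptions:

```yaml
# Secret consumed via the environment rather than a volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA            # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-test        # the test's Secret carries a UID suffix
          key: data-1              # assumed key
```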
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:38:57.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4bb6v
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4bb6v
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4bb6v
Jan  7 11:38:57.901: INFO: Found 0 stateful pods, waiting for 1
Jan  7 11:39:07.919: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  7 11:39:07.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 11:39:08.643: INFO: stderr: "I0107 11:39:08.171229    1597 log.go:172] (0xc000718370) (0xc0005d5360) Create stream\nI0107 11:39:08.171447    1597 log.go:172] (0xc000718370) (0xc0005d5360) Stream added, broadcasting: 1\nI0107 11:39:08.179690    1597 log.go:172] (0xc000718370) Reply frame received for 1\nI0107 11:39:08.179774    1597 log.go:172] (0xc000718370) (0xc0007b6000) Create stream\nI0107 11:39:08.179797    1597 log.go:172] (0xc000718370) (0xc0007b6000) Stream added, broadcasting: 3\nI0107 11:39:08.181597    1597 log.go:172] (0xc000718370) Reply frame received for 3\nI0107 11:39:08.181638    1597 log.go:172] (0xc000718370) (0xc0003b4000) Create stream\nI0107 11:39:08.181650    1597 log.go:172] (0xc000718370) (0xc0003b4000) Stream added, broadcasting: 5\nI0107 11:39:08.183876    1597 log.go:172] (0xc000718370) Reply frame received for 5\nI0107 11:39:08.432004    1597 log.go:172] (0xc000718370) Data frame received for 3\nI0107 11:39:08.432095    1597 log.go:172] (0xc0007b6000) (3) Data frame handling\nI0107 11:39:08.432132    1597 log.go:172] (0xc0007b6000) (3) Data frame sent\nI0107 11:39:08.630384    1597 log.go:172] (0xc000718370) (0xc0007b6000) Stream removed, broadcasting: 3\nI0107 11:39:08.630535    1597 log.go:172] (0xc000718370) Data frame received for 1\nI0107 11:39:08.630577    1597 log.go:172] (0xc0005d5360) (1) Data frame handling\nI0107 11:39:08.630597    1597 log.go:172] (0xc0005d5360) (1) Data frame sent\nI0107 11:39:08.630615    1597 log.go:172] (0xc000718370) (0xc0005d5360) Stream removed, broadcasting: 1\nI0107 11:39:08.630648    1597 log.go:172] (0xc000718370) (0xc0003b4000) Stream removed, broadcasting: 5\nI0107 11:39:08.630826    1597 log.go:172] (0xc000718370) Go away received\nI0107 11:39:08.631299    1597 log.go:172] (0xc000718370) (0xc0005d5360) Stream removed, broadcasting: 1\nI0107 11:39:08.631316    1597 log.go:172] (0xc000718370) (0xc0007b6000) Stream removed, broadcasting: 3\nI0107 11:39:08.631327    1597 log.go:172] (0xc000718370) (0xc0003b4000) Stream removed, broadcasting: 5\n"
Jan  7 11:39:08.643: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 11:39:08.643: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  7 11:39:08.663: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  7 11:39:18.693: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  7 11:39:18.693: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 11:39:18.723: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999088s
Jan  7 11:39:19.809: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990661922s
Jan  7 11:39:20.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.904344933s
Jan  7 11:39:21.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.877125294s
Jan  7 11:39:22.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.856982186s
Jan  7 11:39:23.899: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.832979699s
Jan  7 11:39:24.918: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.814325326s
Jan  7 11:39:25.938: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.794741504s
Jan  7 11:39:26.961: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.77580115s
Jan  7 11:39:27.992: INFO: Verifying statefulset ss doesn't scale past 1 for another 752.867362ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4bb6v
Jan  7 11:39:29.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 11:39:29.551: INFO: stderr: "I0107 11:39:29.255992    1619 log.go:172] (0xc00072c2c0) (0xc0007565a0) Create stream\nI0107 11:39:29.256272    1619 log.go:172] (0xc00072c2c0) (0xc0007565a0) Stream added, broadcasting: 1\nI0107 11:39:29.262398    1619 log.go:172] (0xc00072c2c0) Reply frame received for 1\nI0107 11:39:29.262432    1619 log.go:172] (0xc00072c2c0) (0xc000756640) Create stream\nI0107 11:39:29.262439    1619 log.go:172] (0xc00072c2c0) (0xc000756640) Stream added, broadcasting: 3\nI0107 11:39:29.263559    1619 log.go:172] (0xc00072c2c0) Reply frame received for 3\nI0107 11:39:29.263587    1619 log.go:172] (0xc00072c2c0) (0xc0005b4c80) Create stream\nI0107 11:39:29.263594    1619 log.go:172] (0xc00072c2c0) (0xc0005b4c80) Stream added, broadcasting: 5\nI0107 11:39:29.264790    1619 log.go:172] (0xc00072c2c0) Reply frame received for 5\nI0107 11:39:29.375653    1619 log.go:172] (0xc00072c2c0) Data frame received for 3\nI0107 11:39:29.376371    1619 log.go:172] (0xc000756640) (3) Data frame handling\nI0107 11:39:29.376681    1619 log.go:172] (0xc000756640) (3) Data frame sent\nI0107 11:39:29.533739    1619 log.go:172] (0xc00072c2c0) Data frame received for 1\nI0107 11:39:29.533918    1619 log.go:172] (0xc0007565a0) (1) Data frame handling\nI0107 11:39:29.533957    1619 log.go:172] (0xc0007565a0) (1) Data frame sent\nI0107 11:39:29.533999    1619 log.go:172] (0xc00072c2c0) (0xc0007565a0) Stream removed, broadcasting: 1\nI0107 11:39:29.534492    1619 log.go:172] (0xc00072c2c0) (0xc000756640) Stream removed, broadcasting: 3\nI0107 11:39:29.534809    1619 log.go:172] (0xc00072c2c0) (0xc0005b4c80) Stream removed, broadcasting: 5\nI0107 11:39:29.534896    1619 log.go:172] (0xc00072c2c0) (0xc0007565a0) Stream removed, broadcasting: 1\nI0107 11:39:29.534922    1619 log.go:172] (0xc00072c2c0) (0xc000756640) Stream removed, broadcasting: 3\nI0107 11:39:29.534943    1619 log.go:172] (0xc00072c2c0) (0xc0005b4c80) Stream removed, broadcasting: 5\nI0107 11:39:29.535196    1619 log.go:172] (0xc00072c2c0) Go away received\n"
Jan  7 11:39:29.551: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 11:39:29.551: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 11:39:29.570: INFO: Found 1 stateful pods, waiting for 3
Jan  7 11:39:39.597: INFO: Found 2 stateful pods, waiting for 3
Jan  7 11:39:49.587: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 11:39:49.587: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 11:39:49.587: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Jan  7 11:39:59.591: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 11:39:59.591: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 11:39:59.591: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  7 11:39:59.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 11:40:00.470: INFO: stderr: "I0107 11:39:59.977806    1641 log.go:172] (0xc00071e160) (0xc0005d21e0) Create stream\nI0107 11:39:59.978344    1641 log.go:172] (0xc00071e160) (0xc0005d21e0) Stream added, broadcasting: 1\nI0107 11:39:59.993668    1641 log.go:172] (0xc00071e160) Reply frame received for 1\nI0107 11:39:59.993862    1641 log.go:172] (0xc00071e160) (0xc0005d2280) Create stream\nI0107 11:39:59.993886    1641 log.go:172] (0xc00071e160) (0xc0005d2280) Stream added, broadcasting: 3\nI0107 11:39:59.996094    1641 log.go:172] (0xc00071e160) Reply frame received for 3\nI0107 11:39:59.996134    1641 log.go:172] (0xc00071e160) (0xc000594000) Create stream\nI0107 11:39:59.996145    1641 log.go:172] (0xc00071e160) (0xc000594000) Stream added, broadcasting: 5\nI0107 11:40:00.000459    1641 log.go:172] (0xc00071e160) Reply frame received for 5\nI0107 11:40:00.256176    1641 log.go:172] (0xc00071e160) Data frame received for 3\nI0107 11:40:00.256422    1641 log.go:172] (0xc0005d2280) (3) Data frame handling\nI0107 11:40:00.256449    1641 log.go:172] (0xc0005d2280) (3) Data frame sent\nI0107 11:40:00.438990    1641 log.go:172] (0xc00071e160) Data frame received for 1\nI0107 11:40:00.439293    1641 log.go:172] (0xc00071e160) (0xc0005d2280) Stream removed, broadcasting: 3\nI0107 11:40:00.439527    1641 log.go:172] (0xc0005d21e0) (1) Data frame handling\nI0107 11:40:00.439573    1641 log.go:172] (0xc0005d21e0) (1) Data frame sent\nI0107 11:40:00.439720    1641 log.go:172] (0xc00071e160) (0xc000594000) Stream removed, broadcasting: 5\nI0107 11:40:00.439837    1641 log.go:172] (0xc00071e160) (0xc0005d21e0) Stream removed, broadcasting: 1\nI0107 11:40:00.439898    1641 log.go:172] (0xc00071e160) Go away received\nI0107 11:40:00.441427    1641 log.go:172] (0xc00071e160) (0xc0005d21e0) Stream removed, broadcasting: 1\nI0107 11:40:00.441503    1641 log.go:172] (0xc00071e160) (0xc0005d2280) Stream removed, broadcasting: 3\nI0107 11:40:00.441532    1641 log.go:172] (0xc00071e160) (0xc000594000) Stream removed, broadcasting: 5\n"
Jan  7 11:40:00.471: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 11:40:00.471: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  7 11:40:00.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 11:40:01.305: INFO: stderr: "I0107 11:40:00.745729    1663 log.go:172] (0xc0008402c0) (0xc0006fe640) Create stream\nI0107 11:40:00.746093    1663 log.go:172] (0xc0008402c0) (0xc0006fe640) Stream added, broadcasting: 1\nI0107 11:40:00.754336    1663 log.go:172] (0xc0008402c0) Reply frame received for 1\nI0107 11:40:00.754436    1663 log.go:172] (0xc0008402c0) (0xc0005a4c80) Create stream\nI0107 11:40:00.754450    1663 log.go:172] (0xc0008402c0) (0xc0005a4c80) Stream added, broadcasting: 3\nI0107 11:40:00.755414    1663 log.go:172] (0xc0008402c0) Reply frame received for 3\nI0107 11:40:00.755451    1663 log.go:172] (0xc0008402c0) (0xc000660000) Create stream\nI0107 11:40:00.755457    1663 log.go:172] (0xc0008402c0) (0xc000660000) Stream added, broadcasting: 5\nI0107 11:40:00.756497    1663 log.go:172] (0xc0008402c0) Reply frame received for 5\nI0107 11:40:00.998742    1663 log.go:172] (0xc0008402c0) Data frame received for 3\nI0107 11:40:00.998992    1663 log.go:172] (0xc0005a4c80) (3) Data frame handling\nI0107 11:40:00.999052    1663 log.go:172] (0xc0005a4c80) (3) Data frame sent\nI0107 11:40:01.293704    1663 log.go:172] (0xc0008402c0) (0xc0005a4c80) Stream removed, broadcasting: 3\nI0107 11:40:01.293845    1663 log.go:172] (0xc0008402c0) Data frame received for 1\nI0107 11:40:01.293863    1663 log.go:172] (0xc0006fe640) (1) Data frame handling\nI0107 11:40:01.293898    1663 log.go:172] (0xc0006fe640) (1) Data frame sent\nI0107 11:40:01.293909    1663 log.go:172] (0xc0008402c0) (0xc0006fe640) Stream removed, broadcasting: 1\nI0107 11:40:01.293950    1663 log.go:172] (0xc0008402c0) (0xc000660000) Stream removed, broadcasting: 5\nI0107 11:40:01.294026    1663 log.go:172] (0xc0008402c0) Go away received\nI0107 11:40:01.294804    1663 log.go:172] (0xc0008402c0) (0xc0006fe640) Stream removed, broadcasting: 1\nI0107 11:40:01.294822    1663 log.go:172] (0xc0008402c0) (0xc0005a4c80) Stream removed, broadcasting: 3\nI0107 11:40:01.294830    1663 log.go:172] (0xc0008402c0) (0xc000660000) Stream removed, broadcasting: 5\n"
Jan  7 11:40:01.306: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 11:40:01.306: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  7 11:40:01.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 11:40:01.750: INFO: stderr: "I0107 11:40:01.460262    1683 log.go:172] (0xc0006f2370) (0xc000712640) Create stream\nI0107 11:40:01.460433    1683 log.go:172] (0xc0006f2370) (0xc000712640) Stream added, broadcasting: 1\nI0107 11:40:01.464396    1683 log.go:172] (0xc0006f2370) Reply frame received for 1\nI0107 11:40:01.464430    1683 log.go:172] (0xc0006f2370) (0xc0007126e0) Create stream\nI0107 11:40:01.464439    1683 log.go:172] (0xc0006f2370) (0xc0007126e0) Stream added, broadcasting: 3\nI0107 11:40:01.465055    1683 log.go:172] (0xc0006f2370) Reply frame received for 3\nI0107 11:40:01.465074    1683 log.go:172] (0xc0006f2370) (0xc000664c80) Create stream\nI0107 11:40:01.465083    1683 log.go:172] (0xc0006f2370) (0xc000664c80) Stream added, broadcasting: 5\nI0107 11:40:01.465625    1683 log.go:172] (0xc0006f2370) Reply frame received for 5\nI0107 11:40:01.578985    1683 log.go:172] (0xc0006f2370) Data frame received for 3\nI0107 11:40:01.579059    1683 log.go:172] (0xc0007126e0) (3) Data frame handling\nI0107 11:40:01.579077    1683 log.go:172] (0xc0007126e0) (3) Data frame sent\nI0107 11:40:01.736397    1683 log.go:172] (0xc0006f2370) (0xc0007126e0) Stream removed, broadcasting: 3\nI0107 11:40:01.736591    1683 log.go:172] (0xc0006f2370) (0xc000664c80) Stream removed, broadcasting: 5\nI0107 11:40:01.736715    1683 log.go:172] (0xc0006f2370) Data frame received for 1\nI0107 11:40:01.736779    1683 log.go:172] (0xc000712640) (1) Data frame handling\nI0107 11:40:01.736799    1683 log.go:172] (0xc000712640) (1) Data frame sent\nI0107 11:40:01.736843    1683 log.go:172] (0xc0006f2370) (0xc000712640) Stream removed, broadcasting: 1\nI0107 11:40:01.736865    1683 log.go:172] (0xc0006f2370) Go away received\nI0107 11:40:01.737470    1683 log.go:172] (0xc0006f2370) (0xc000712640) Stream removed, broadcasting: 1\nI0107 11:40:01.737490    1683 log.go:172] (0xc0006f2370) (0xc0007126e0) Stream removed, broadcasting: 3\nI0107 11:40:01.737502    1683 log.go:172] (0xc0006f2370) (0xc000664c80) Stream removed, broadcasting: 5\n"
Jan  7 11:40:01.750: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 11:40:01.750: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  7 11:40:01.750: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 11:40:01.817: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  7 11:40:01.817: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  7 11:40:01.817: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  7 11:40:01.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999989477s
Jan  7 11:40:02.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976121501s
Jan  7 11:40:04.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.934149694s
Jan  7 11:40:05.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.766036891s
Jan  7 11:40:06.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.742415157s
Jan  7 11:40:07.171: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.712567598s
Jan  7 11:40:08.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.68765861s
Jan  7 11:40:09.223: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.662970611s
Jan  7 11:40:10.250: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.636440732s
Jan  7 11:40:11.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 609.147511ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-4bb6v
Jan  7 11:40:12.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 11:40:13.101: INFO: stderr: "I0107 11:40:12.583085    1705 log.go:172] (0xc0007e2160) (0xc0005be1e0) Create stream\nI0107 11:40:12.585170    1705 log.go:172] (0xc0007e2160) (0xc0005be1e0) Stream added, broadcasting: 1\nI0107 11:40:12.599107    1705 log.go:172] (0xc0007e2160) Reply frame received for 1\nI0107 11:40:12.599447    1705 log.go:172] (0xc0007e2160) (0xc0006beb40) Create stream\nI0107 11:40:12.599473    1705 log.go:172] (0xc0007e2160) (0xc0006beb40) Stream added, broadcasting: 3\nI0107 11:40:12.602703    1705 log.go:172] (0xc0007e2160) Reply frame received for 3\nI0107 11:40:12.602885    1705 log.go:172] (0xc0007e2160) (0xc0003fc000) Create stream\nI0107 11:40:12.602897    1705 log.go:172] (0xc0007e2160) (0xc0003fc000) Stream added, broadcasting: 5\nI0107 11:40:12.604953    1705 log.go:172] (0xc0007e2160) Reply frame received for 5\nI0107 11:40:12.844479    1705 log.go:172] (0xc0007e2160) Data frame received for 3\nI0107 11:40:12.844624    1705 log.go:172] (0xc0006beb40) (3) Data frame handling\nI0107 11:40:12.844686    1705 log.go:172] (0xc0006beb40) (3) Data frame sent\nI0107 11:40:13.081936    1705 log.go:172] (0xc0007e2160) Data frame received for 1\nI0107 11:40:13.082246    1705 log.go:172] (0xc0007e2160) (0xc0003fc000) Stream removed, broadcasting: 5\nI0107 11:40:13.082367    1705 log.go:172] (0xc0005be1e0) (1) Data frame handling\nI0107 11:40:13.082471    1705 log.go:172] (0xc0005be1e0) (1) Data frame sent\nI0107 11:40:13.082877    1705 log.go:172] (0xc0007e2160) (0xc0006beb40) Stream removed, broadcasting: 3\nI0107 11:40:13.083140    1705 log.go:172] (0xc0007e2160) (0xc0005be1e0) Stream removed, broadcasting: 1\nI0107 11:40:13.083188    1705 log.go:172] (0xc0007e2160) Go away received\nI0107 11:40:13.084639    1705 log.go:172] (0xc0007e2160) (0xc0005be1e0) Stream removed, broadcasting: 1\nI0107 11:40:13.084665    1705 log.go:172] (0xc0007e2160) (0xc0006beb40) Stream removed, broadcasting: 3\nI0107 11:40:13.084680    1705 log.go:172] (0xc0007e2160) (0xc0003fc000) Stream removed, broadcasting: 5\n"
Jan  7 11:40:13.102: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 11:40:13.102: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 11:40:13.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 11:40:13.592: INFO: stderr: "I0107 11:40:13.319749    1726 log.go:172] (0xc0005c60b0) (0xc0005ee640) Create stream\nI0107 11:40:13.320047    1726 log.go:172] (0xc0005c60b0) (0xc0005ee640) Stream added, broadcasting: 1\nI0107 11:40:13.326985    1726 log.go:172] (0xc0005c60b0) Reply frame received for 1\nI0107 11:40:13.327035    1726 log.go:172] (0xc0005c60b0) (0xc0002dabe0) Create stream\nI0107 11:40:13.327047    1726 log.go:172] (0xc0005c60b0) (0xc0002dabe0) Stream added, broadcasting: 3\nI0107 11:40:13.328221    1726 log.go:172] (0xc0005c60b0) Reply frame received for 3\nI0107 11:40:13.328245    1726 log.go:172] (0xc0005c60b0) (0xc0005ee6e0) Create stream\nI0107 11:40:13.328251    1726 log.go:172] (0xc0005c60b0) (0xc0005ee6e0) Stream added, broadcasting: 5\nI0107 11:40:13.329617    1726 log.go:172] (0xc0005c60b0) Reply frame received for 5\nI0107 11:40:13.466165    1726 log.go:172] (0xc0005c60b0) Data frame received for 3\nI0107 11:40:13.466321    1726 log.go:172] (0xc0002dabe0) (3) Data frame handling\nI0107 11:40:13.466364    1726 log.go:172] (0xc0002dabe0) (3) Data frame sent\nI0107 11:40:13.582312    1726 log.go:172] (0xc0005c60b0) Data frame received for 1\nI0107 11:40:13.582489    1726 log.go:172] (0xc0005c60b0) (0xc0002dabe0) Stream removed, broadcasting: 3\nI0107 11:40:13.582573    1726 log.go:172] (0xc0005ee640) (1) Data frame handling\nI0107 11:40:13.582598    1726 log.go:172] (0xc0005ee640) (1) Data frame sent\nI0107 11:40:13.582658    1726 log.go:172] (0xc0005c60b0) (0xc0005ee6e0) Stream removed, broadcasting: 5\nI0107 11:40:13.582686    1726 log.go:172] (0xc0005c60b0) (0xc0005ee640) Stream removed, broadcasting: 1\nI0107 11:40:13.582702    1726 log.go:172] (0xc0005c60b0) Go away received\nI0107 11:40:13.583243    1726 log.go:172] (0xc0005c60b0) (0xc0005ee640) Stream removed, broadcasting: 1\nI0107 11:40:13.583263    1726 log.go:172] (0xc0005c60b0) (0xc0002dabe0) Stream removed, broadcasting: 3\nI0107 11:40:13.583272    1726 log.go:172] (0xc0005c60b0) (0xc0005ee6e0) Stream removed, broadcasting: 5\n"
Jan  7 11:40:13.592: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 11:40:13.592: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 11:40:13.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4bb6v ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 11:40:14.217: INFO: stderr: "I0107 11:40:13.866014    1747 log.go:172] (0xc0006d8370) (0xc0006f8640) Create stream\nI0107 11:40:13.866340    1747 log.go:172] (0xc0006d8370) (0xc0006f8640) Stream added, broadcasting: 1\nI0107 11:40:13.889952    1747 log.go:172] (0xc0006d8370) Reply frame received for 1\nI0107 11:40:13.890063    1747 log.go:172] (0xc0006d8370) (0xc0005c8be0) Create stream\nI0107 11:40:13.890076    1747 log.go:172] (0xc0006d8370) (0xc0005c8be0) Stream added, broadcasting: 3\nI0107 11:40:13.891586    1747 log.go:172] (0xc0006d8370) Reply frame received for 3\nI0107 11:40:13.891621    1747 log.go:172] (0xc0006d8370) (0xc000432000) Create stream\nI0107 11:40:13.891633    1747 log.go:172] (0xc0006d8370) (0xc000432000) Stream added, broadcasting: 5\nI0107 11:40:13.892910    1747 log.go:172] (0xc0006d8370) Reply frame received for 5\nI0107 11:40:14.030079    1747 log.go:172] (0xc0006d8370) Data frame received for 3\nI0107 11:40:14.030229    1747 log.go:172] (0xc0005c8be0) (3) Data frame handling\nI0107 11:40:14.030262    1747 log.go:172] (0xc0005c8be0) (3) Data frame sent\nI0107 11:40:14.200326    1747 log.go:172] (0xc0006d8370) Data frame received for 1\nI0107 11:40:14.200501    1747 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0107 11:40:14.200539    1747 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0107 11:40:14.200567    1747 log.go:172] (0xc0006d8370) (0xc0006f8640) Stream removed, broadcasting: 1\nI0107 11:40:14.201221    1747 log.go:172] (0xc0006d8370) (0xc0005c8be0) Stream removed, broadcasting: 3\nI0107 11:40:14.201569    1747 log.go:172] (0xc0006d8370) (0xc000432000) Stream removed, broadcasting: 5\nI0107 11:40:14.202249    1747 log.go:172] (0xc0006d8370) (0xc0006f8640) Stream removed, broadcasting: 1\nI0107 11:40:14.202264    1747 log.go:172] (0xc0006d8370) (0xc0005c8be0) Stream removed, broadcasting: 3\nI0107 11:40:14.202290    1747 log.go:172] (0xc0006d8370) (0xc000432000) Stream removed, broadcasting: 5\n"
Jan  7 11:40:14.218: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 11:40:14.218: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 11:40:14.218: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  7 11:40:34.475: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4bb6v
Jan  7 11:40:34.506: INFO: Scaling statefulset ss to 0
Jan  7 11:40:34.542: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 11:40:34.565: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:40:34.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4bb6v" for this suite.
Jan  7 11:40:42.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:40:42.857: INFO: namespace: e2e-tests-statefulset-4bb6v, resource: bindings, ignored listing per whitelist
Jan  7 11:40:42.880: INFO: namespace e2e-tests-statefulset-4bb6v deletion completed in 8.191218658s

• [SLOW TEST:105.348 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
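Editor's note: the StatefulSet spec above breaks each pod's readiness probe by moving nginx's index.html aside with `mv -v ... || true`, then restores it the same way. The `|| true` is what makes the step idempotent: rerunning the command after the file is gone still exits 0, so the `kubectl exec` itself never fails. A minimal local sketch of that pattern, outside any cluster (the demo paths are illustrative, not the ones the suite uses):

```shell
# Recreate the test's "mv ... || true" idempotency pattern locally.
demo=$(mktemp -d)
mkdir -p "$demo/html"
echo 'hello' > "$demo/html/index.html"

# First run: the file exists, so mv succeeds and prints the rename.
mv -v "$demo/html/index.html" "$demo/" || true

# Second run: the file is already gone; mv fails, but "|| true"
# keeps the overall exit status 0, just as in the e2e exec step.
mv -v "$demo/html/index.html" "$demo/" || true
echo "exit=$?"
```

Inside the test, the same command runs via `kubectl exec --namespace=<ns> <pod> -- /bin/sh -c '...'`, once per stateful pod.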
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:40:42.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  7 11:40:43.202: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  7 11:40:48.219: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:40:50.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-z76ns" for this suite.
Jan  7 11:40:59.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:40:59.495: INFO: namespace: e2e-tests-replication-controller-z76ns, resource: bindings, ignored listing per whitelist
Jan  7 11:41:00.910: INFO: namespace e2e-tests-replication-controller-z76ns deletion completed in 10.54151606s

• [SLOW TEST:18.029 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
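Editor's note: "released" in the ReplicationController spec above means the controller stops managing a pod once the pod's labels no longer match the controller's selector; the pod keeps running but is orphaned, and the RC creates a replacement to satisfy its replica count. A hypothetical manifest illustrating the mechanism (the name `pod-release` matches the log; the image and relabel value are illustrative):

```yaml
# Hypothetical RC whose selector matches pods labeled name=pod-release.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: app
        image: nginx
# Overwriting the label detaches ("releases") the pod from the RC,
# which then spins up a replacement to satisfy replicas: 1:
#   kubectl label pod <pod-name> name=released --overwrite
```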
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:41:00.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  7 11:41:01.223: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  7 11:41:01.236: INFO: Waiting for terminating namespaces to be deleted...
Jan  7 11:41:01.241: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  7 11:41:01.266: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:41:01.266: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  7 11:41:01.266: INFO: 	Container weave ready: true, restart count 0
Jan  7 11:41:01.266: INFO: 	Container weave-npc ready: true, restart count 0
Jan  7 11:41:01.266: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  7 11:41:01.266: INFO: 	Container coredns ready: true, restart count 0
Jan  7 11:41:01.266: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:41:01.266: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:41:01.266: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:41:01.266: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  7 11:41:01.266: INFO: 	Container coredns ready: true, restart count 0
Jan  7 11:41:01.266: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  7 11:41:01.266: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9aa46852-3142-11ea-8b51-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9aa46852-3142-11ea-8b51-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9aa46852-3142-11ea-8b51-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:41:24.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-5hhlj" for this suite.
Jan  7 11:41:46.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:41:46.432: INFO: namespace: e2e-tests-sched-pred-5hhlj, resource: bindings, ignored listing per whitelist
Jan  7 11:41:46.636: INFO: namespace e2e-tests-sched-pred-5hhlj deletion completed in 22.353943464s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:45.726 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
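Editor's note: the NodeSelector predicate exercised above can be reproduced by hand: apply a label to a node, then create a pod whose `nodeSelector` names that label, exactly as the test does with its random `kubernetes.io/e2e-...` key. A hedged sketch (the label key/value, pod name, and image are illustrative, not the test's generated values):

```yaml
# First: kubectl label node <node-name> example.com/e2e-demo=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e-demo: "42"   # pod schedules only onto nodes carrying this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

If no node carries the label, the pod stays Pending with a FailedScheduling event; once a matching node exists, it schedules there.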
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:41:46.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  7 11:41:46.906: INFO: Waiting up to 5m0s for pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-nxr5l" to be "success or failure"
Jan  7 11:41:46.932: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.94149ms
Jan  7 11:41:48.962: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055858106s
Jan  7 11:41:50.989: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082339176s
Jan  7 11:41:53.286: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380099258s
Jan  7 11:41:55.300: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394113669s
Jan  7 11:41:57.323: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.417029007s
STEP: Saw pod success
Jan  7 11:41:57.324: INFO: Pod "pod-af5b34f1-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:41:57.330: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-af5b34f1-3142-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:41:57.603: INFO: Waiting for pod pod-af5b34f1-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:41:57.613: INFO: Pod pod-af5b34f1-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:41:57.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nxr5l" for this suite.
Jan  7 11:42:04.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:42:05.099: INFO: namespace: e2e-tests-emptydir-nxr5l, resource: bindings, ignored listing per whitelist
Jan  7 11:42:05.287: INFO: namespace e2e-tests-emptydir-nxr5l deletion completed in 7.663980378s

• [SLOW TEST:18.650 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
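Editor's note: the EmptyDir spec above creates a pod whose container stats the mount point of a default-medium `emptyDir` volume and checks the reported mode. A hypothetical pod illustrating the setup (names and image are illustrative; "default medium" means the volume is backed by node storage rather than `medium: Memory`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # prints the mount's permission bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium; add "medium: Memory" for a tmpfs-backed volume
  restartPolicy: Never
```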
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:42:05.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  7 11:42:05.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-nkzht,SelfLink:/api/v1/namespaces/e2e-tests-watch-nkzht/configmaps/e2e-watch-test-resource-version,UID:ba821c02-3142-11ea-a994-fa163e34d433,ResourceVersion:17470486,Generation:0,CreationTimestamp:2020-01-07 11:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  7 11:42:05.582: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-nkzht,SelfLink:/api/v1/namespaces/e2e-tests-watch-nkzht/configmaps/e2e-watch-test-resource-version,UID:ba821c02-3142-11ea-a994-fa163e34d433,ResourceVersion:17470487,Generation:0,CreationTimestamp:2020-01-07 11:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:42:05.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nkzht" for this suite.
Jan  7 11:42:11.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:42:11.794: INFO: namespace: e2e-tests-watch-nkzht, resource: bindings, ignored listing per whitelist
Jan  7 11:42:11.828: INFO: namespace e2e-tests-watch-nkzht deletion completed in 6.239597623s

• [SLOW TEST:6.540 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:42:11.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-be5fe791-3142-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 11:42:12.147: INFO: Waiting up to 5m0s for pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-2hvjs" to be "success or failure"
Jan  7 11:42:12.178: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.171594ms
Jan  7 11:42:14.205: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057986599s
Jan  7 11:42:16.233: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085963236s
Jan  7 11:42:18.523: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375645385s
Jan  7 11:42:20.550: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.403124974s
Jan  7 11:42:22.625: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.477358156s
Jan  7 11:42:24.663: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.515605882s
STEP: Saw pod success
Jan  7 11:42:24.663: INFO: Pod "pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:42:24.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 11:42:24.797: INFO: Waiting for pod pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:42:24.831: INFO: Pod pod-secrets-be74abdd-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:42:24.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2hvjs" for this suite.
Jan  7 11:42:30.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:42:31.111: INFO: namespace: e2e-tests-secrets-2hvjs, resource: bindings, ignored listing per whitelist
Jan  7 11:42:31.122: INFO: namespace e2e-tests-secrets-2hvjs deletion completed in 6.240065129s

• [SLOW TEST:19.293 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:42:31.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  7 11:42:31.299: INFO: Waiting up to 5m0s for pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-tqxcx" to be "success or failure"
Jan  7 11:42:31.324: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.386442ms
Jan  7 11:42:33.344: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045705921s
Jan  7 11:42:35.361: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062640761s
Jan  7 11:42:37.808: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509711533s
Jan  7 11:42:40.732: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.433273898s
Jan  7 11:42:42.750: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.450915513s
STEP: Saw pod success
Jan  7 11:42:42.750: INFO: Pod "downward-api-c9e109cb-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:42:42.755: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c9e109cb-3142-11ea-8b51-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  7 11:42:43.293: INFO: Waiting for pod downward-api-c9e109cb-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:42:43.441: INFO: Pod downward-api-c9e109cb-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:42:43.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tqxcx" for this suite.
Jan  7 11:42:51.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:42:51.680: INFO: namespace: e2e-tests-downward-api-tqxcx, resource: bindings, ignored listing per whitelist
Jan  7 11:42:51.689: INFO: namespace e2e-tests-downward-api-tqxcx deletion completed in 8.23692815s

• [SLOW TEST:20.567 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
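The Downward API test above exercises exposing a container's resource limits and requests as environment variables. For orientation only — this manifest is an illustrative reconstruction with hypothetical names, not taken from the test source — a pod using the downward API's `resourceFieldRef` looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:       # exposes the container's own limit
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```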
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:42:51.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  7 11:42:51.868: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:43:08.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-6frhx" for this suite.
Jan  7 11:43:14.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:43:15.004: INFO: namespace: e2e-tests-init-container-6frhx, resource: bindings, ignored listing per whitelist
Jan  7 11:43:15.097: INFO: namespace e2e-tests-init-container-6frhx deletion completed in 6.192273722s

• [SLOW TEST:23.407 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
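For context on the `RestartNever` init-container test above: init containers are declared under `spec.initContainers` and must each run to completion, in order, before the app containers start. A minimal illustrative manifest (hypothetical names, not from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo   # hypothetical name
spec:
  restartPolicy: Never
  initContainers:           # run sequentially, each to completion
  - name: init-1
    image: busybox:1.29
    command: ["sh", "-c", "true"]
  containers:               # started only after all init containers succeed
  - name: run-1
    image: busybox:1.29
    command: ["sh", "-c", "true"]
```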
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:43:15.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  7 11:43:15.293: INFO: Waiting up to 5m0s for pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005" in namespace "e2e-tests-containers-l74xn" to be "success or failure"
Jan  7 11:43:15.305: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.547166ms
Jan  7 11:43:17.319: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025968398s
Jan  7 11:43:19.330: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036704784s
Jan  7 11:43:21.537: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243673473s
Jan  7 11:43:23.551: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257708565s
Jan  7 11:43:25.579: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.286250671s
STEP: Saw pod success
Jan  7 11:43:25.580: INFO: Pod "client-containers-e4115138-3142-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:43:25.629: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e4115138-3142-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:43:25.704: INFO: Waiting for pod client-containers-e4115138-3142-11ea-8b51-0242ac110005 to disappear
Jan  7 11:43:25.709: INFO: Pod client-containers-e4115138-3142-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:43:25.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-l74xn" for this suite.
Jan  7 11:43:32.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:43:32.051: INFO: namespace: e2e-tests-containers-l74xn, resource: bindings, ignored listing per whitelist
Jan  7 11:43:32.193: INFO: namespace e2e-tests-containers-l74xn deletion completed in 6.477182914s

• [SLOW TEST:17.096 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
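The Docker Containers test above verifies that a pod's `command` field overrides the image's default ENTRYPOINT. As a hedged sketch (hypothetical names; the real test uses an e2e test image rather than busybox):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/echo"]    # replaces the image's ENTRYPOINT
    args: ["hello", "world"]  # replaces the image's CMD
```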
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:43:32.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-ee539e72-3142-11ea-8b51-0242ac110005
STEP: Creating secret with name s-test-opt-upd-ee53a0ec-3142-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ee539e72-3142-11ea-8b51-0242ac110005
STEP: Updating secret s-test-opt-upd-ee53a0ec-3142-11ea-8b51-0242ac110005
STEP: Creating secret with name s-test-opt-create-ee53a136-3142-11ea-8b51-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:45:09.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-skl88" for this suite.
Jan  7 11:45:49.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:45:49.376: INFO: namespace: e2e-tests-secrets-skl88, resource: bindings, ignored listing per whitelist
Jan  7 11:45:49.511: INFO: namespace e2e-tests-secrets-skl88 deletion completed in 40.280123187s

• [SLOW TEST:137.318 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:45:49.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 11:45:49.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-2l957" to be "success or failure"
Jan  7 11:45:49.752: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.249363ms
Jan  7 11:45:51.819: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089796111s
Jan  7 11:45:53.847: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117703786s
Jan  7 11:45:56.321: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591780775s
Jan  7 11:45:58.771: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.041760941s
Jan  7 11:46:00.797: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.068194308s
STEP: Saw pod success
Jan  7 11:46:00.798: INFO: Pod "downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:46:00.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 11:46:00.939: INFO: Waiting for pod downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005 to disappear
Jan  7 11:46:01.705: INFO: Pod downwardapi-volume-4027769c-3143-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:46:01.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2l957" for this suite.
Jan  7 11:46:08.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:46:08.609: INFO: namespace: e2e-tests-downward-api-2l957, resource: bindings, ignored listing per whitelist
Jan  7 11:46:08.620: INFO: namespace e2e-tests-downward-api-2l957 deletion completed in 6.899758942s

• [SLOW TEST:19.109 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:46:08.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan  7 11:46:08.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:11.378: INFO: stderr: ""
Jan  7 11:46:11.378: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 11:46:11.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:11.591: INFO: stderr: ""
Jan  7 11:46:11.591: INFO: stdout: "update-demo-nautilus-lqfh4 update-demo-nautilus-mn64f "
Jan  7 11:46:11.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lqfh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:11.780: INFO: stderr: ""
Jan  7 11:46:11.780: INFO: stdout: ""
Jan  7 11:46:11.780: INFO: update-demo-nautilus-lqfh4 is created but not running
Jan  7 11:46:16.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:17.022: INFO: stderr: ""
Jan  7 11:46:17.022: INFO: stdout: "update-demo-nautilus-lqfh4 update-demo-nautilus-mn64f "
Jan  7 11:46:17.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lqfh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:17.221: INFO: stderr: ""
Jan  7 11:46:17.221: INFO: stdout: ""
Jan  7 11:46:17.221: INFO: update-demo-nautilus-lqfh4 is created but not running
Jan  7 11:46:22.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:22.596: INFO: stderr: ""
Jan  7 11:46:22.596: INFO: stdout: "update-demo-nautilus-lqfh4 update-demo-nautilus-mn64f "
Jan  7 11:46:22.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lqfh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:22.774: INFO: stderr: ""
Jan  7 11:46:22.774: INFO: stdout: ""
Jan  7 11:46:22.774: INFO: update-demo-nautilus-lqfh4 is created but not running
Jan  7 11:46:27.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:27.970: INFO: stderr: ""
Jan  7 11:46:27.971: INFO: stdout: "update-demo-nautilus-lqfh4 update-demo-nautilus-mn64f "
Jan  7 11:46:27.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lqfh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:28.096: INFO: stderr: ""
Jan  7 11:46:28.096: INFO: stdout: "true"
Jan  7 11:46:28.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lqfh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:28.216: INFO: stderr: ""
Jan  7 11:46:28.216: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 11:46:28.216: INFO: validating pod update-demo-nautilus-lqfh4
Jan  7 11:46:28.240: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 11:46:28.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 11:46:28.241: INFO: update-demo-nautilus-lqfh4 is verified up and running
Jan  7 11:46:28.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mn64f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:28.392: INFO: stderr: ""
Jan  7 11:46:28.392: INFO: stdout: "true"
Jan  7 11:46:28.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mn64f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:46:28.656: INFO: stderr: ""
Jan  7 11:46:28.656: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 11:46:28.656: INFO: validating pod update-demo-nautilus-mn64f
Jan  7 11:46:28.667: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 11:46:28.668: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 11:46:28.668: INFO: update-demo-nautilus-mn64f is verified up and running
STEP: rolling-update to new replication controller
Jan  7 11:46:28.670: INFO: scanned /root for discovery docs: 
Jan  7 11:46:28.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:47:04.680: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  7 11:47:04.680: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 11:47:04.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:47:04.857: INFO: stderr: ""
Jan  7 11:47:04.857: INFO: stdout: "update-demo-kitten-8949n update-demo-kitten-fsqpj "
Jan  7 11:47:04.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8949n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:47:05.024: INFO: stderr: ""
Jan  7 11:47:05.024: INFO: stdout: "true"
Jan  7 11:47:05.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8949n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:47:05.120: INFO: stderr: ""
Jan  7 11:47:05.120: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  7 11:47:05.120: INFO: validating pod update-demo-kitten-8949n
Jan  7 11:47:05.159: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  7 11:47:05.159: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  7 11:47:05.159: INFO: update-demo-kitten-8949n is verified up and running
Jan  7 11:47:05.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fsqpj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:47:05.292: INFO: stderr: ""
Jan  7 11:47:05.293: INFO: stdout: "true"
Jan  7 11:47:05.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fsqpj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m7gkl'
Jan  7 11:47:05.387: INFO: stderr: ""
Jan  7 11:47:05.388: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  7 11:47:05.388: INFO: validating pod update-demo-kitten-fsqpj
Jan  7 11:47:05.396: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  7 11:47:05.396: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  7 11:47:05.396: INFO: update-demo-kitten-fsqpj is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:47:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m7gkl" for this suite.
Jan  7 11:47:31.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:47:31.470: INFO: namespace: e2e-tests-kubectl-m7gkl, resource: bindings, ignored listing per whitelist
Jan  7 11:47:31.613: INFO: namespace e2e-tests-kubectl-m7gkl deletion completed in 26.212742009s

• [SLOW TEST:82.993 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
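In the Update Demo run above, each `validating pod …` / `got data: {"image": …}` pair is the framework fetching the JSON the pod serves and comparing its `image` field against the expected artwork (`nautilus.jpg` before the rolling update, `kitten.jpg` after). A minimal sketch of that check — the helper name is hypothetical, and the real framework also retries on transient failures:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageData mirrors the JSON payload each update-demo pod serves,
// e.g. {"image": "nautilus.jpg"}.
type imageData struct {
	Image string `json:"image"`
}

// validate unmarshals a pod's response body and checks the image name,
// as the "validating pod ..." log lines above do.
func validate(body []byte, want string) error {
	var d imageData
	if err := json.Unmarshal(body, &d); err != nil {
		return err
	}
	if d.Image != want {
		return fmt.Errorf("got %q, want %q", d.Image, want)
	}
	return nil
}

func main() {
	fmt.Println(validate([]byte(`{"image": "kitten.jpg"}`), "kitten.jpg"))
}
```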
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:47:31.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 11:47:42.332: INFO: Waiting up to 5m0s for pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005" in namespace "e2e-tests-pods-cj69n" to be "success or failure"
Jan  7 11:47:42.557: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 224.667887ms
Jan  7 11:47:44.590: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257338316s
Jan  7 11:47:46.605: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27184305s
Jan  7 11:47:48.916: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583147945s
Jan  7 11:47:50.942: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608798829s
Jan  7 11:47:52.965: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.63267698s
STEP: Saw pod success
Jan  7 11:47:52.966: INFO: Pod "client-envvars-833fef80-3143-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:47:52.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-833fef80-3143-11ea-8b51-0242ac110005 container env3cont: 
STEP: delete the pod
Jan  7 11:47:53.129: INFO: Waiting for pod client-envvars-833fef80-3143-11ea-8b51-0242ac110005 to disappear
Jan  7 11:47:53.175: INFO: Pod client-envvars-833fef80-3143-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:47:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cj69n" for this suite.
Jan  7 11:48:47.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:48:47.407: INFO: namespace: e2e-tests-pods-cj69n, resource: bindings, ignored listing per whitelist
Jan  7 11:48:47.548: INFO: namespace e2e-tests-pods-cj69n deletion completed in 54.30090295s

• [SLOW TEST:75.934 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:48:47.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan  7 11:48:47.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-wfmb7 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  7 11:49:00.171: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0107 11:48:58.777928    2156 log.go:172] (0xc000930160) (0xc000836140) Create stream\nI0107 11:48:58.778856    2156 log.go:172] (0xc000930160) (0xc000836140) Stream added, broadcasting: 1\nI0107 11:48:58.840391    2156 log.go:172] (0xc000930160) Reply frame received for 1\nI0107 11:48:58.840775    2156 log.go:172] (0xc000930160) (0xc00065c5a0) Create stream\nI0107 11:48:58.840854    2156 log.go:172] (0xc000930160) (0xc00065c5a0) Stream added, broadcasting: 3\nI0107 11:48:58.844174    2156 log.go:172] (0xc000930160) Reply frame received for 3\nI0107 11:48:58.844227    2156 log.go:172] (0xc000930160) (0xc0008361e0) Create stream\nI0107 11:48:58.844236    2156 log.go:172] (0xc000930160) (0xc0008361e0) Stream added, broadcasting: 5\nI0107 11:48:58.845966    2156 log.go:172] (0xc000930160) Reply frame received for 5\nI0107 11:48:58.846016    2156 log.go:172] (0xc000930160) (0xc000702d20) Create stream\nI0107 11:48:58.846030    2156 log.go:172] (0xc000930160) (0xc000702d20) Stream added, broadcasting: 7\nI0107 11:48:58.848835    2156 log.go:172] (0xc000930160) Reply frame received for 7\nI0107 11:48:58.849521    2156 log.go:172] (0xc00065c5a0) (3) Writing data frame\nI0107 11:48:58.849881    2156 log.go:172] (0xc00065c5a0) (3) Writing data frame\nI0107 11:48:58.883079    2156 log.go:172] (0xc000930160) Data frame received for 5\nI0107 11:48:58.883179    2156 log.go:172] (0xc0008361e0) (5) Data frame handling\nI0107 11:48:58.883251    2156 log.go:172] (0xc0008361e0) (5) Data frame sent\nI0107 11:48:58.886160    2156 log.go:172] (0xc000930160) Data frame received for 5\nI0107 11:48:58.886170    2156 log.go:172] (0xc0008361e0) (5) Data frame handling\nI0107 11:48:58.886181    2156 log.go:172] (0xc0008361e0) (5) Data frame sent\nI0107 11:49:00.084586    2156 log.go:172] (0xc000930160) Data frame received for 1\nI0107 11:49:00.085133    2156 log.go:172] (0xc000930160) (0xc0008361e0) Stream removed, broadcasting: 5\nI0107 11:49:00.085335    2156 log.go:172] (0xc000836140) (1) Data frame handling\nI0107 11:49:00.085406    2156 log.go:172] (0xc000836140) (1) Data frame sent\nI0107 11:49:00.085546    2156 log.go:172] (0xc000930160) (0xc00065c5a0) Stream removed, broadcasting: 3\nI0107 11:49:00.085839    2156 log.go:172] (0xc000930160) (0xc000836140) Stream removed, broadcasting: 1\nI0107 11:49:00.086077    2156 log.go:172] (0xc000930160) (0xc000702d20) Stream removed, broadcasting: 7\nI0107 11:49:00.086177    2156 log.go:172] (0xc000930160) Go away received\nI0107 11:49:00.086818    2156 log.go:172] (0xc000930160) (0xc000836140) Stream removed, broadcasting: 1\nI0107 11:49:00.086856    2156 log.go:172] (0xc000930160) (0xc00065c5a0) Stream removed, broadcasting: 3\nI0107 11:49:00.086875    2156 log.go:172] (0xc000930160) (0xc0008361e0) Stream removed, broadcasting: 5\nI0107 11:49:00.086893    2156 log.go:172] (0xc000930160) (0xc000702d20) Stream removed, broadcasting: 7\n"
Jan  7 11:49:00.172: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:49:02.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wfmb7" for this suite.
Jan  7 11:49:08.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:49:08.803: INFO: namespace: e2e-tests-kubectl-wfmb7, resource: bindings, ignored listing per whitelist
Jan  7 11:49:08.899: INFO: namespace e2e-tests-kubectl-wfmb7 deletion completed in 6.275886399s

• [SLOW TEST:21.352 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
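The deprecated `kubectl run --generator=job/v1` invocation in the log expands to roughly the Job manifest below; the field layout is a sketch reconstructed from the flags (`--restart=OnFailure`, `--stdin`, the busybox image and command), not the exact object kubectl produces:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true        # from --stdin; lets the test attach and write "abcd1234"
        stdinOnce: true    # close stdin after the first attach, so `cat` terminates
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

The `--rm=true` flag has no manifest equivalent: it is client-side behavior, which is why the stdout in the log ends with `job.batch "e2e-test-rm-busybox-job" deleted`.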
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:49:08.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  7 11:49:09.190: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  7 11:49:09.206: INFO: Waiting for terminating namespaces to be deleted...
Jan  7 11:49:09.211: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan  7 11:49:09.228: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  7 11:49:09.228: INFO: 	Container coredns ready: true, restart count 0
Jan  7 11:49:09.228: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:49:09.228: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:49:09.228: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:49:09.228: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  7 11:49:09.228: INFO: 	Container coredns ready: true, restart count 0
Jan  7 11:49:09.228: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  7 11:49:09.228: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  7 11:49:09.228: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 11:49:09.228: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  7 11:49:09.228: INFO: 	Container weave ready: true, restart count 0
Jan  7 11:49:09.228: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.364: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan  7 11:49:09.365: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b728247d-3143-11ea-8b51-0242ac110005.15e79863efbacba2], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zh8s6/filler-pod-b728247d-3143-11ea-8b51-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b728247d-3143-11ea-8b51-0242ac110005.15e798652db73bc4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b728247d-3143-11ea-8b51-0242ac110005.15e79865c473e70a], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b728247d-3143-11ea-8b51-0242ac110005.15e79865ee328aaf], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e7986645453610], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:49:20.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zh8s6" for this suite.
Jan  7 11:49:27.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:49:28.681: INFO: namespace: e2e-tests-sched-pred-zh8s6, resource: bindings, ignored listing per whitelist
Jan  7 11:49:28.739: INFO: namespace e2e-tests-sched-pred-zh8s6 deletion completed in 8.048204019s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:19.839 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
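The scheduler-predicates test above works by arithmetic: it sums the CPU already requested on the node (770m per the log), fills the remainder with a filler pod, then creates one more pod whose request cannot fit, expecting `FailedScheduling: Insufficient cpu`. A minimal sketch of that bookkeeping, with a hypothetical node allocatable of 2000m (the log does not show the real allocatable) and a hypothetical 600m extra request:

```python
# Existing per-pod CPU requests taken from the log, in millicores.
existing = {
    "coredns-54ff9cd656-79kxx": 100,
    "coredns-54ff9cd656-bmkk4": 100,
    "etcd": 0,
    "kube-apiserver": 250,
    "kube-controller-manager": 200,
    "kube-proxy": 0,
    "kube-scheduler": 100,
    "weave-net": 20,
}
allocatable_m = 2000          # hypothetical node allocatable (not in the log)

used_m = sum(existing.values())       # 770m already requested
filler_m = allocatable_m - used_m     # filler pod sized to saturate the node
extra_m = 600                         # the "additional-pod" request

# The predicate the scheduler evaluates: requests must not exceed allocatable.
fits = used_m + filler_m + extra_m <= allocatable_m
print(used_m, filler_m, fits)
# → 770 1230 False  — i.e. "0/1 nodes are available: 1 Insufficient cpu."
```

Note the predicate is based on declared requests, not actual usage: etcd and kube-proxy run with no request (cpu=0m in the log) and so consume nothing in this calculation.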
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:49:28.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  7 11:49:29.148: INFO: Waiting up to 5m0s for pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-lnmfm" to be "success or failure"
Jan  7 11:49:29.194: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.78432ms
Jan  7 11:49:31.208: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059701849s
Jan  7 11:49:33.216: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068122118s
Jan  7 11:49:35.229: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081245156s
Jan  7 11:49:37.243: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095438743s
Jan  7 11:49:39.257: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10877366s
Jan  7 11:49:41.284: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.135657682s
Jan  7 11:49:43.696: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.547781485s
STEP: Saw pod success
Jan  7 11:49:43.696: INFO: Pod "pod-c2e33d1c-3143-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:49:43.708: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c2e33d1c-3143-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 11:49:43.967: INFO: Waiting for pod pod-c2e33d1c-3143-11ea-8b51-0242ac110005 to disappear
Jan  7 11:49:43.980: INFO: Pod pod-c2e33d1c-3143-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:49:43.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lnmfm" for this suite.
Jan  7 11:49:50.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:49:50.119: INFO: namespace: e2e-tests-emptydir-lnmfm, resource: bindings, ignored listing per whitelist
Jan  7 11:49:50.183: INFO: namespace e2e-tests-emptydir-lnmfm deletion completed in 6.18948911s

• [SLOW TEST:21.443 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:49:50.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:50:02.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-j7mkq" for this suite.
Jan  7 11:50:09.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:50:09.313: INFO: namespace: e2e-tests-kubelet-test-j7mkq, resource: bindings, ignored listing per whitelist
Jan  7 11:50:09.321: INFO: namespace e2e-tests-kubelet-test-j7mkq deletion completed in 6.70036672s

• [SLOW TEST:19.137 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
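The "terminated reason" test above runs a container that always fails and asserts the kubelet records a terminated state with a reason. A sketch of an equivalent pod (name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]     # exits 1 immediately, every time
```

The assertion the test makes can be reproduced with `kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'`, which reports `Error` for a non-zero exit.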
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:50:09.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:50:19.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tdgcz" for this suite.
Jan  7 11:51:03.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:51:03.881: INFO: namespace: e2e-tests-kubelet-test-tdgcz, resource: bindings, ignored listing per whitelist
Jan  7 11:51:03.897: INFO: namespace e2e-tests-kubelet-test-tdgcz deletion completed in 44.21273616s

• [SLOW TEST:54.576 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
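The companion test above checks that a container's stdout is captured by the kubelet and retrievable via the logs API. A minimal equivalent (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-echo-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: echo
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from busybox'"]
```

After the pod completes, `kubectl logs busybox-echo-demo` returns the echoed line, which is the behavior the conformance check asserts.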
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:51:03.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  7 11:51:14.830: INFO: Successfully updated pod "labelsupdatefb959bdd-3143-11ea-8b51-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:51:16.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zl8dd" for this suite.
Jan  7 11:51:40.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:51:41.226: INFO: namespace: e2e-tests-projected-zl8dd, resource: bindings, ignored listing per whitelist
Jan  7 11:51:41.252: INFO: namespace e2e-tests-projected-zl8dd deletion completed in 24.300798033s

• [SLOW TEST:37.354 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:51:41.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  7 11:51:52.056: INFO: Successfully updated pod "pod-update-11c72824-3144-11ea-8b51-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan  7 11:51:52.077: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:51:52.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9hpmf" for this suite.
Jan  7 11:52:16.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:52:16.288: INFO: namespace: e2e-tests-pods-9hpmf, resource: bindings, ignored listing per whitelist
Jan  7 11:52:16.322: INFO: namespace e2e-tests-pods-9hpmf deletion completed in 24.239203116s

• [SLOW TEST:35.069 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:52:16.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 11:52:16.436: INFO: Creating ReplicaSet my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005
Jan  7 11:52:16.522: INFO: Pod name my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005: Found 0 pods out of 1
Jan  7 11:52:21.549: INFO: Pod name my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005: Found 1 pods out of 1
Jan  7 11:52:21.549: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005" is running
Jan  7 11:52:27.573: INFO: Pod "my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005-qzxxw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:52:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:52:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:52:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-07 11:52:16 +0000 UTC Reason: Message:}])
Jan  7 11:52:27.573: INFO: Trying to dial the pod
Jan  7 11:52:32.642: INFO: Controller my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005: Got expected result from replica 1 [my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005-qzxxw]: "my-hostname-basic-26a8e967-3144-11ea-8b51-0242ac110005-qzxxw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:52:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-j49ng" for this suite.
Jan  7 11:52:40.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:52:40.887: INFO: namespace: e2e-tests-replicaset-j49ng, resource: bindings, ignored listing per whitelist
Jan  7 11:52:40.923: INFO: namespace e2e-tests-replicaset-j49ng deletion completed in 8.272320596s

• [SLOW TEST:24.600 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
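The ReplicaSet test above creates a one-replica set whose pod serves its own hostname over HTTP, then dials each replica and checks the response matches the pod name. A sketch of such a ReplicaSet; the image and port are hypothetical, since the log does not show them:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic       # hypothetical short name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: example.invalid/serve-hostname:1.1   # hypothetical image
        ports:
        - containerPort: 9376                       # hypothetical port
```

The log's condition dump shows the expected lifecycle: `PodScheduled` and `Initialized` flip to True immediately, while `Ready`/`ContainersReady` lag with `ContainersNotReady` until the image is pulled and the container starts.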
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:52:40.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-35f6bd01-3144-11ea-8b51-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-35f6c2a1-3144-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-35f6bd01-3144-11ea-8b51-0242ac110005
STEP: Updating configmap cm-test-opt-upd-35f6c2a1-3144-11ea-8b51-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-35f6c49b-3144-11ea-8b51-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:54:27.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4x869" for this suite.
Jan  7 11:54:53.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:54:53.375: INFO: namespace: e2e-tests-projected-4x869, resource: bindings, ignored listing per whitelist
Jan  7 11:54:53.531: INFO: namespace e2e-tests-projected-4x869 deletion completed in 26.252852248s

• [SLOW TEST:132.608 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
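The `optional` flag is the crux of the projected-configMap test above: with `optional: true`, a referenced ConfigMap may be deleted or not yet exist without failing the mount, and the kubelet converges the volume contents as ConfigMaps are deleted, updated, and created. The relevant volume stanza (shortened hypothetical names in place of the test's generated ones):

```yaml
volumes:
- name: projected-cm
  projected:
    sources:
    - configMap:
        name: cm-test-opt-del      # deleted mid-test; mount survives
        optional: true
    - configMap:
        name: cm-test-opt-upd      # updated mid-test; file contents refresh
        optional: true
    - configMap:
        name: cm-test-opt-create   # created mid-test; file appears
        optional: true
```

Without `optional: true`, a missing ConfigMap would keep the pod stuck in `ContainerCreating` instead, so the test's "waiting to observe update in volume" step would never start.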
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:54:53.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  7 11:54:54.002: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472012,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  7 11:54:54.003: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472012,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  7 11:55:04.077: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472025,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  7 11:55:04.078: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472025,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  7 11:55:14.098: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472037,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  7 11:55:14.099: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472037,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  7 11:55:24.126: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472050,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  7 11:55:24.126: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-a,UID:8492711b-3144-11ea-a994-fa163e34d433,ResourceVersion:17472050,Generation:0,CreationTimestamp:2020-01-07 11:54:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  7 11:55:34.150: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-b,UID:9c7f4dda-3144-11ea-a994-fa163e34d433,ResourceVersion:17472063,Generation:0,CreationTimestamp:2020-01-07 11:55:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  7 11:55:34.150: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-b,UID:9c7f4dda-3144-11ea-a994-fa163e34d433,ResourceVersion:17472063,Generation:0,CreationTimestamp:2020-01-07 11:55:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  7 11:55:44.183: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-b,UID:9c7f4dda-3144-11ea-a994-fa163e34d433,ResourceVersion:17472076,Generation:0,CreationTimestamp:2020-01-07 11:55:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  7 11:55:44.183: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7rgml,SelfLink:/api/v1/namespaces/e2e-tests-watch-7rgml/configmaps/e2e-watch-test-configmap-b,UID:9c7f4dda-3144-11ea-a994-fa163e34d433,ResourceVersion:17472076,Generation:0,CreationTimestamp:2020-01-07 11:55:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:55:54.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7rgml" for this suite.
Jan  7 11:56:00.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:56:00.380: INFO: namespace: e2e-tests-watch-7rgml, resource: bindings, ignored listing per whitelist
Jan  7 11:56:00.538: INFO: namespace e2e-tests-watch-7rgml deletion completed in 6.337232108s

• [SLOW TEST:67.007 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
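The Watchers test above registers several watchers with different label selectors, which is why each ADDED/MODIFIED/DELETED event for configmap A appears twice in the log: one watcher selects label A specifically, and another selects both A and B. The routing logic being verified can be sketched as follows (the helper names and the simplified "empty selector matches everything" rule are illustrative, not the e2e framework's own code):

```python
# Sketch of label-selector-based watch event routing, as exercised by the
# test above. Names and data shapes here are hypothetical simplifications.

def matches(selector: dict, labels: dict) -> bool:
    """An equality-based selector matches when every key/value is present.
    An empty selector matches all objects (simplification of set-based
    selectors used by the real test)."""
    return all(labels.get(k) == v for k, v in selector.items())

def deliver(event: dict, watchers: dict) -> list:
    """Return the names of watchers whose selector matches the event."""
    return [name for name, selector in watchers.items()
            if matches(selector, event["labels"])]

watchers = {
    "watcher-A":  {"watch-this-configmap": "multiple-watchers-A"},
    "watcher-B":  {"watch-this-configmap": "multiple-watchers-B"},
    "watcher-AB": {},  # observes both A and B
}

event = {"type": "ADDED",
         "labels": {"watch-this-configmap": "multiple-watchers-A"}}
print(deliver(event, watchers))  # two watchers observe the A event
```

Two matching watchers per event is consistent with each notification being logged twice above.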
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:56:00.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 11:56:00.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-6pbvv'
Jan  7 11:56:00.972: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 11:56:00.972: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  7 11:56:03.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6pbvv'
Jan  7 11:56:03.267: INFO: stderr: ""
Jan  7 11:56:03.267: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:56:03.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6pbvv" for this suite.
Jan  7 11:56:09.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:56:09.950: INFO: namespace: e2e-tests-kubectl-6pbvv, resource: bindings, ignored listing per whitelist
Jan  7 11:56:10.154: INFO: namespace e2e-tests-kubectl-6pbvv deletion completed in 6.870962612s

• [SLOW TEST:9.615 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:56:10.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  7 11:56:10.971: INFO: Number of nodes with available pods: 0
Jan  7 11:56:10.971: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:11.998: INFO: Number of nodes with available pods: 0
Jan  7 11:56:11.999: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:13.008: INFO: Number of nodes with available pods: 0
Jan  7 11:56:13.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:14.012: INFO: Number of nodes with available pods: 0
Jan  7 11:56:14.013: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:14.989: INFO: Number of nodes with available pods: 0
Jan  7 11:56:14.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:15.987: INFO: Number of nodes with available pods: 0
Jan  7 11:56:15.987: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:17.112: INFO: Number of nodes with available pods: 0
Jan  7 11:56:17.112: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:17.989: INFO: Number of nodes with available pods: 0
Jan  7 11:56:17.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:19.005: INFO: Number of nodes with available pods: 0
Jan  7 11:56:19.005: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:20.000: INFO: Number of nodes with available pods: 0
Jan  7 11:56:20.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:21.051: INFO: Number of nodes with available pods: 0
Jan  7 11:56:21.051: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:21.990: INFO: Number of nodes with available pods: 1
Jan  7 11:56:21.990: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  7 11:56:22.226: INFO: Number of nodes with available pods: 0
Jan  7 11:56:22.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:23.260: INFO: Number of nodes with available pods: 0
Jan  7 11:56:23.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:24.268: INFO: Number of nodes with available pods: 0
Jan  7 11:56:24.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:25.297: INFO: Number of nodes with available pods: 0
Jan  7 11:56:25.298: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:26.811: INFO: Number of nodes with available pods: 0
Jan  7 11:56:26.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:27.294: INFO: Number of nodes with available pods: 0
Jan  7 11:56:27.294: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:29.067: INFO: Number of nodes with available pods: 0
Jan  7 11:56:29.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:29.633: INFO: Number of nodes with available pods: 0
Jan  7 11:56:29.634: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:30.276: INFO: Number of nodes with available pods: 0
Jan  7 11:56:30.276: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:31.396: INFO: Number of nodes with available pods: 0
Jan  7 11:56:31.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:32.250: INFO: Number of nodes with available pods: 0
Jan  7 11:56:32.250: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:33.255: INFO: Number of nodes with available pods: 0
Jan  7 11:56:33.255: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 11:56:34.268: INFO: Number of nodes with available pods: 1
Jan  7 11:56:34.268: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5vzz2, will wait for the garbage collector to delete the pods
Jan  7 11:56:34.354: INFO: Deleting DaemonSet.extensions daemon-set took: 18.622644ms
Jan  7 11:56:34.454: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.70261ms
Jan  7 11:56:41.210: INFO: Number of nodes with available pods: 0
Jan  7 11:56:41.210: INFO: Number of running nodes: 0, number of available pods: 0
Jan  7 11:56:41.219: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5vzz2/daemonsets","resourceVersion":"17472222"},"items":null}

Jan  7 11:56:41.223: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5vzz2/pods","resourceVersion":"17472222"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:56:41.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5vzz2" for this suite.
Jan  7 11:56:49.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:56:49.405: INFO: namespace: e2e-tests-daemonsets-5vzz2, resource: bindings, ignored listing per whitelist
Jan  7 11:56:49.499: INFO: namespace e2e-tests-daemonsets-5vzz2 deletion completed in 8.249205442s

• [SLOW TEST:39.345 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
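The DaemonSet test above polls once per second until the number of nodes running an available daemon pod equals the number of schedulable nodes (here a single node, hunter-server-hu5at5svl7ps). A minimal version of that readiness predicate, with hypothetical names rather than the framework's actual helper, might look like:

```python
# Illustrative version of the "daemon pods launched on every node" check
# that the polling loop above repeats. Names and shapes are hypothetical.

def nodes_with_available_pods(pods: list) -> int:
    """Count distinct nodes hosting at least one available daemon pod."""
    return len({p["node"] for p in pods if p["available"]})

def daemonset_ready(pods: list, schedulable_nodes: int) -> bool:
    return nodes_with_available_pods(pods) == schedulable_nodes

# Single-node cluster, as in the log:
pods = [{"node": "hunter-server-hu5at5svl7ps", "available": False}]
print(daemonset_ready(pods, 1))  # False while the pod is still starting

pods[0]["available"] = True
print(daemonset_ready(pods, 1))  # True once the pod becomes available
```

The same predicate explains the second polling run in the test: after a pod is forced to Failed, availability drops back to 0 until the controller revives it.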
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:56:49.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-gwf8b/configmap-test-c98e0f73-3144-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 11:56:49.756: INFO: Waiting up to 5m0s for pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-gwf8b" to be "success or failure"
Jan  7 11:56:49.778: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.437662ms
Jan  7 11:56:52.035: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278065515s
Jan  7 11:56:54.052: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295544942s
Jan  7 11:56:56.180: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423184147s
Jan  7 11:56:58.250: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.493901674s
Jan  7 11:57:00.634: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.877527247s
STEP: Saw pod success
Jan  7 11:57:00.634: INFO: Pod "pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:57:00.661: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005 container env-test: 
STEP: delete the pod
Jan  7 11:57:00.940: INFO: Waiting for pod pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005 to disappear
Jan  7 11:57:00.967: INFO: Pod pod-configmaps-c98f2cde-3144-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:57:00.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gwf8b" for this suite.
Jan  7 11:57:07.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:57:07.221: INFO: namespace: e2e-tests-configmap-gwf8b, resource: bindings, ignored listing per whitelist
Jan  7 11:57:07.345: INFO: namespace e2e-tests-configmap-gwf8b deletion completed in 6.320945799s

• [SLOW TEST:17.845 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:57:07.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jan  7 11:57:07.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  7 11:57:09.474: INFO: stderr: ""
Jan  7 11:57:09.474: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:57:09.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ndk8g" for this suite.
Jan  7 11:57:15.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:57:15.683: INFO: namespace: e2e-tests-kubectl-ndk8g, resource: bindings, ignored listing per whitelist
Jan  7 11:57:15.696: INFO: namespace e2e-tests-kubectl-ndk8g deletion completed in 6.213629247s

• [SLOW TEST:8.351 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
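The cluster-info stdout captured above contains ANSI color escape sequences (`\x1b[0;32m` and friends) because kubectl colorizes this output. A consumer asserting on the text, as the test does, would typically strip them first; a small sketch (the regex covers only the SGR color codes seen in this log):

```python
import re

# kubectl cluster-info colors its output with ANSI SGR escape sequences,
# visible in the raw stdout logged above. Stripping them yields plain text.
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    return ANSI_SGR.sub("", text)

raw = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.212:6443\x1b[0m")
print(strip_ansi(raw))
# Kubernetes master is running at https://172.24.4.212:6443
```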
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:57:15.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-d9271341-3144-11ea-8b51-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-d92714b4-3144-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d9271341-3144-11ea-8b51-0242ac110005
STEP: Updating configmap cm-test-opt-upd-d92714b4-3144-11ea-8b51-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-d92714ea-3144-11ea-8b51-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:58:34.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jxrsh" for this suite.
Jan  7 11:58:59.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:58:59.248: INFO: namespace: e2e-tests-configmap-jxrsh, resource: bindings, ignored listing per whitelist
Jan  7 11:58:59.258: INFO: namespace e2e-tests-configmap-jxrsh deletion completed in 24.3045598s

• [SLOW TEST:103.562 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:58:59.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 11:58:59.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-nt45j" to be "success or failure"
Jan  7 11:58:59.577: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.758344ms
Jan  7 11:59:01.590: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027988374s
Jan  7 11:59:03.622: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060680907s
Jan  7 11:59:05.911: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349143495s
Jan  7 11:59:08.113: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550817453s
Jan  7 11:59:10.132: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.570160293s
Jan  7 11:59:12.146: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.584200283s
STEP: Saw pod success
Jan  7 11:59:12.146: INFO: Pod "downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 11:59:12.153: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 11:59:12.335: INFO: Waiting for pod downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005 to disappear
Jan  7 11:59:12.355: INFO: Pod downwardapi-volume-16d79839-3145-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:59:12.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nt45j" for this suite.
Jan  7 11:59:18.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 11:59:18.511: INFO: namespace: e2e-tests-downward-api-nt45j, resource: bindings, ignored listing per whitelist
Jan  7 11:59:18.766: INFO: namespace e2e-tests-downward-api-nt45j deletion completed in 6.4005443s

• [SLOW TEST:19.508 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 11:59:18.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 11:59:29.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sp4n6" for this suite.
Jan  7 12:00:11.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:00:11.676: INFO: namespace: e2e-tests-kubelet-test-sp4n6, resource: bindings, ignored listing per whitelist
Jan  7 12:00:11.800: INFO: namespace e2e-tests-kubelet-test-sp4n6 deletion completed in 42.234948659s

• [SLOW TEST:53.033 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
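The Kubelet test above verifies that a read-only busybox container cannot write to its root filesystem. A minimal sketch of the kind of pod manifest such a test submits is below; the pod name, image tag, and probe command are illustrative, not the exact values the e2e suite uses:

```python
# Illustrative pod spec for a read-only root filesystem check.
# Setting securityContext.readOnlyRootFilesystem mounts / read-only,
# so the write attempt in the command fails with
# "Read-only file system" (names and command are assumptions).
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-readonly-fs"},
    "spec": {
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            # The redirect targets the root filesystem on purpose:
            # its failure is what the test asserts on.
            "command": ["sh", "-c", "echo hi > /file"],
            "securityContext": {"readOnlyRootFilesystem": True},
        }],
        "restartPolicy": "Never",
    },
}
```

Applying this with `kubectl create -f -` (as the suite does for its own specs) would produce a pod whose container exits non-zero on the blocked write.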
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:00:11.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 12:00:12.031: INFO: Creating deployment "nginx-deployment"
Jan  7 12:00:12.040: INFO: Waiting for observed generation 1
Jan  7 12:00:14.461: INFO: Waiting for all required pods to come up
Jan  7 12:00:14.899: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  7 12:00:58.083: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  7 12:00:58.108: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  7 12:00:58.164: INFO: Updating deployment nginx-deployment
Jan  7 12:00:58.164: INFO: Waiting for observed generation 2
Jan  7 12:01:01.166: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  7 12:01:01.180: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  7 12:01:01.190: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  7 12:01:03.421: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  7 12:01:03.422: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  7 12:01:03.488: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  7 12:01:03.505: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  7 12:01:03.506: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  7 12:01:03.673: INFO: Updating deployment nginx-deployment
Jan  7 12:01:03.673: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  7 12:01:03.710: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  7 12:01:05.851: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
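The two `.spec.replicas` checks above (20 for the first rollout's ReplicaSet, 13 for the second) follow from proportional scaling: with maxSurge=3, scaling the deployment from 10 to 30 allows 33 total replicas, and the 20 extra replicas are split across the ReplicaSets in proportion to their current sizes (8 and 5). A simplified sketch of that arithmetic, assuming largest-remainder rounding (the real controller also consults per-ReplicaSet annotations and has its own tie-breaking rules):

```python
def proportional_scale(replica_sets, new_total, max_surge):
    """Distribute a deployment scale-up across its ReplicaSets in
    proportion to their current sizes. Simplified sketch of the
    behavior this test verifies, not the controller's exact code."""
    current_total = sum(replica_sets.values())
    allowed = new_total + max_surge      # rollout may surge above target
    delta = allowed - current_total      # replicas still to hand out
    shares = {name: (size * delta) // current_total
              for name, size in replica_sets.items()}
    # Hand out the rounding leftover by largest fractional remainder.
    leftover = delta - sum(shares.values())
    by_remainder = sorted(replica_sets,
                          key=lambda n: (replica_sets[n] * delta) % current_total,
                          reverse=True)
    for name in by_remainder[:leftover]:
        shares[name] += 1
    return {name: replica_sets[name] + shares[name] for name in replica_sets}

# Numbers from the run above: scale 10 -> 30 with maxSurge=3,
# old ReplicaSet at 8 replicas, new (broken-image) ReplicaSet at 5.
print(proportional_scale({"old": 8, "new": 5}, new_total=30, max_surge=3))
# -> {'old': 20, 'new': 13}, matching the .spec.replicas checks logged above
```

The 20/13 split keeps the ratio between the healthy and the stuck rollout roughly constant while respecting the 33-replica surge ceiling.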
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  7 12:01:06.332: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r9dxb/deployments/nginx-deployment,UID:42236cf1-3145-11ea-a994-fa163e34d433,ResourceVersion:17472846,Generation:3,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-07 12:00:59 +0000 UTC 2020-01-07 12:00:12 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-07 12:01:04 +0000 UTC 2020-01-07 12:01:04 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  7 12:01:06.763: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r9dxb/replicasets/nginx-deployment-5c98f8fb5,UID:5da557bc-3145-11ea-a994-fa163e34d433,ResourceVersion:17472840,Generation:3,CreationTimestamp:2020-01-07 12:00:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 42236cf1-3145-11ea-a994-fa163e34d433 0xc0002f3327 0xc0002f3328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  7 12:01:06.764: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  7 12:01:06.764: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r9dxb/replicasets/nginx-deployment-85ddf47c5d,UID:422d1974-3145-11ea-a994-fa163e34d433,ResourceVersion:17472838,Generation:3,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 42236cf1-3145-11ea-a994-fa163e34d433 0xc0002f3a87 0xc0002f3a88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  7 12:01:07.000: INFO: Pod "nginx-deployment-5c98f8fb5-28c7n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-28c7n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-28c7n,UID:5dd1bf03-3145-11ea-a994-fa163e34d433,ResourceVersion:17472832,Generation:0,CreationTimestamp:2020-01-07 12:00:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc001b9a537 0xc001b9a538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b9b500} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc001b9b520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 12:00:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.001: INFO: Pod "nginx-deployment-5c98f8fb5-29f6r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-29f6r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-29f6r,UID:626fc4d4-3145-11ea-a994-fa163e34d433,ResourceVersion:17472866,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc00212a247 0xc00212a248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00212a940} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00212a960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.001: INFO: Pod "nginx-deployment-5c98f8fb5-8jshv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8jshv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-8jshv,UID:62862421-3145-11ea-a994-fa163e34d433,ResourceVersion:17472888,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc00212ad47 0xc00212ad48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00212b070} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00212b090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.002: INFO: Pod "nginx-deployment-5c98f8fb5-bgxrl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bgxrl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-bgxrl,UID:6286c6da-3145-11ea-a994-fa163e34d433,ResourceVersion:17472885,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc00212b427 0xc00212b428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00212b600} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00212b620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.002: INFO: Pod "nginx-deployment-5c98f8fb5-dz62c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dz62c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-dz62c,UID:62863838-3145-11ea-a994-fa163e34d433,ResourceVersion:17472882,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc00212b937 0xc00212b938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00212bc40} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc00212bcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.002: INFO: Pod "nginx-deployment-5c98f8fb5-gcqbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gcqbf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-gcqbf,UID:5dcf6b25-3145-11ea-a994-fa163e34d433,ResourceVersion:17472829,Generation:0,CreationTimestamp:2020-01-07 12:00:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc0022323a7 0xc0022323a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002232450} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002232470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 12:00:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.003: INFO: Pod "nginx-deployment-5c98f8fb5-js9b5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-js9b5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-js9b5,UID:6286fdfa-3145-11ea-a994-fa163e34d433,ResourceVersion:17472883,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc002232a87 0xc002232a88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002233640} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002233660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.003: INFO: Pod "nginx-deployment-5c98f8fb5-mx22k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mx22k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-mx22k,UID:5dca79b8-3145-11ea-a994-fa163e34d433,ResourceVersion:17472814,Generation:0,CreationTimestamp:2020-01-07 12:00:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc002233dd7 0xc002233dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002238120} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002238140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 12:00:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.003: INFO: Pod "nginx-deployment-5c98f8fb5-n2p8t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n2p8t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-n2p8t,UID:626fbf8b-3145-11ea-a994-fa163e34d433,ResourceVersion:17472868,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc002238207 0xc002238208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002238270} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002238290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.003: INFO: Pod "nginx-deployment-5c98f8fb5-plg5c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-plg5c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-plg5c,UID:62ab6dda-3145-11ea-a994-fa163e34d433,ResourceVersion:17472901,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc0022384a7 0xc0022384a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002238520} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002238540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.003: INFO: Pod "nginx-deployment-5c98f8fb5-rp4hl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rp4hl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-rp4hl,UID:623721a9-3145-11ea-a994-fa163e34d433,ResourceVersion:17472857,Generation:0,CreationTimestamp:2020-01-07 12:01:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc0022385e7 0xc0022385e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002238650} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002238670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.004: INFO: Pod "nginx-deployment-5c98f8fb5-vhdzj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vhdzj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-vhdzj,UID:5e693e17-3145-11ea-a994-fa163e34d433,ResourceVersion:17472835,Generation:0,CreationTimestamp:2020-01-07 12:00:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc0022386e7 0xc0022386e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002238780} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022387a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 12:00:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.004: INFO: Pod "nginx-deployment-5c98f8fb5-xq45c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xq45c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-5c98f8fb5-xq45c,UID:5e5bd388-3145-11ea-a994-fa163e34d433,ResourceVersion:17472833,Generation:0,CreationTimestamp:2020-01-07 12:00:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 5da557bc-3145-11ea-a994-fa163e34d433 0xc002238867 0xc002238868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022388d0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0022388f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 12:00:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.004: INFO: Pod "nginx-deployment-85ddf47c5d-4hdw9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4hdw9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-4hdw9,UID:4237c964-3145-11ea-a994-fa163e34d433,ResourceVersion:17472752,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002238a57 0xc002238a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002238ac0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002238b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-07 12:00:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://12de28000ca84720ef3f48cb75926715351ad6b25a0904b7e9e8df4e2555d3fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
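The two dumps above contrast the rollout's two ReplicaSets: pods from the new template (image `nginx:404`, which cannot be pulled) stay `Phase=Pending` with `Ready=False`, while pods from the old template (`nginx:1.14-alpine`) are `Phase=Running` with `Ready=True` and get logged as "available". A minimal sketch of that availability check, using a plain dict stand-in for the dumped `PodStatus` rather than the real `k8s.io/api/core/v1` types (the field names mirror the log; the helper is hypothetical, not the e2e framework's actual function):

```python
# Sketch of the check behind the "is available" / "is not available"
# lines in the log. `status` is a simplified stand-in for the dumped
# PodStatus: only Phase and Conditions are modeled.

def is_available(status: dict) -> bool:
    """A pod counts as available once it is Running with Ready=True."""
    if status.get("Phase") != "Running":
        return False
    for cond in status.get("Conditions", []):
        if cond["Type"] == "Ready":
            return cond["Status"] == "True"
    return False

# New-ReplicaSet pod: nginx:404 cannot be pulled, so the container
# sits in ContainerCreating/ImagePull and Ready stays False.
pending = {
    "Phase": "Pending",
    "Conditions": [
        {"Type": "Ready", "Status": "False"},
        {"Type": "PodScheduled", "Status": "True"},
    ],
}

# Old-ReplicaSet pod: nginx:1.14-alpine is running and Ready.
running = {
    "Phase": "Running",
    "Conditions": [{"Type": "Ready", "Status": "True"}],
}

print(is_available(pending))  # False
print(is_available(running))  # True
```

This is why the Deployment controller keeps the old pods serving during the rollout: the new pods never satisfy the readiness condition, so they never count toward availability.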
Jan  7 12:01:07.005: INFO: Pod "nginx-deployment-85ddf47c5d-4wxxt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4wxxt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-4wxxt,UID:621aca38-3145-11ea-a994-fa163e34d433,ResourceVersion:17472880,Generation:0,CreationTimestamp:2020-01-07 12:01:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002238bd7 0xc002238bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002238c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002238c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-07 12:01:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.005: INFO: Pod "nginx-deployment-85ddf47c5d-5lfmt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5lfmt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-5lfmt,UID:424faac3-3145-11ea-a994-fa163e34d433,ResourceVersion:17472761,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002238de7 0xc002238de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002238e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002238e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-07 12:00:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5c54fe49b4a0ba71d19234ddafbc2e6f41d2dd806795712bbf5b34497616adcd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.005: INFO: Pod "nginx-deployment-85ddf47c5d-827sn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-827sn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-827sn,UID:62901573-3145-11ea-a994-fa163e34d433,ResourceVersion:17472889,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002238f67 0xc002238f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002238ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002239010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.005: INFO: Pod "nginx-deployment-85ddf47c5d-82q2t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-82q2t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-82q2t,UID:423eedd6-3145-11ea-a994-fa163e34d433,ResourceVersion:17472755,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002239087 0xc002239088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022390f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002239120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-07 12:00:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0e35a55bb0441f407b03af1a82b93546141343edd404cbcc18fc0c67ed9e66e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.006: INFO: Pod "nginx-deployment-85ddf47c5d-96rpb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-96rpb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-96rpb,UID:628f9bd7-3145-11ea-a994-fa163e34d433,ResourceVersion:17472891,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc0022391e7 0xc0022391e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002239250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002239270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.006: INFO: Pod "nginx-deployment-85ddf47c5d-9htlw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9htlw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-9htlw,UID:628ff616-3145-11ea-a994-fa163e34d433,ResourceVersion:17472894,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002239507 0xc002239508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002239580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022395a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.006: INFO: Pod "nginx-deployment-85ddf47c5d-9zqr2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9zqr2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-9zqr2,UID:423ee8f4-3145-11ea-a994-fa163e34d433,ResourceVersion:17472773,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc002239617 0xc002239618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022396a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ed0520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-07 12:00:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c50e31ea093b2589e5aaad83129ce07e22948f9cc0e0a6fd21e76cd6eeb8358b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.006: INFO: Pod "nginx-deployment-85ddf47c5d-b6bmr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b6bmr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-b6bmr,UID:6270c543-3145-11ea-a994-fa163e34d433,ResourceVersion:17472874,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001ed0627 0xc001ed0628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ed0710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ed0730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.007: INFO: Pod "nginx-deployment-85ddf47c5d-dc4wh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dc4wh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-dc4wh,UID:6270cf21-3145-11ea-a994-fa163e34d433,ResourceVersion:17472886,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001ed07b7 0xc001ed07b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ed08b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ed08e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.007: INFO: Pod "nginx-deployment-85ddf47c5d-dwnvb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dwnvb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-dwnvb,UID:623bf7b3-3145-11ea-a994-fa163e34d433,ResourceVersion:17472860,Generation:0,CreationTimestamp:2020-01-07 12:01:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001ed0957 0xc001ed0958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ed09d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ed0e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.007: INFO: Pod "nginx-deployment-85ddf47c5d-gs9tj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gs9tj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-gs9tj,UID:424effcf-3145-11ea-a994-fa163e34d433,ResourceVersion:17472776,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001ed0ec7 0xc001ed0ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f10120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-07 12:00:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5fab60648f3c16831e68cdbb790ae4bd86ebeb6f23abad0180009e1e5ac3970f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.007: INFO: Pod "nginx-deployment-85ddf47c5d-jthlz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jthlz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-jthlz,UID:423ecb9e-3145-11ea-a994-fa163e34d433,ResourceVersion:17472742,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f10247 0xc001f10248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f102b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f102d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-07 12:00:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1bebfbf96bb2f468317eaebf9f5d5b86693152abcba0059a148df50838c86a30}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.007: INFO: Pod "nginx-deployment-85ddf47c5d-kp54b" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kp54b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-kp54b,UID:424f41b9-3145-11ea-a994-fa163e34d433,ResourceVersion:17472764,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f10397 0xc001f10398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10400} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f10420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-07 12:00:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9a254f7cd96614f01b108c49f159f69f0c49c7881a9ae4dd234f6ead5ef5f1a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.008: INFO: Pod "nginx-deployment-85ddf47c5d-mhj9w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mhj9w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-mhj9w,UID:623c2aa8-3145-11ea-a994-fa163e34d433,ResourceVersion:17472861,Generation:0,CreationTimestamp:2020-01-07 12:01:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f10517 0xc001f10518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f105a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.008: INFO: Pod "nginx-deployment-85ddf47c5d-nk2ws" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nk2ws,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-nk2ws,UID:6270f425-3145-11ea-a994-fa163e34d433,ResourceVersion:17472884,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f10617 0xc001f10618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f106a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.008: INFO: Pod "nginx-deployment-85ddf47c5d-phddx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-phddx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-phddx,UID:6270aab1-3145-11ea-a994-fa163e34d433,ResourceVersion:17472887,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f107a7 0xc001f107a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f10830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.008: INFO: Pod "nginx-deployment-85ddf47c5d-rxtd8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rxtd8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-rxtd8,UID:628fcd71-3145-11ea-a994-fa163e34d433,ResourceVersion:17472892,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f108a7 0xc001f108a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f10930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.009: INFO: Pod "nginx-deployment-85ddf47c5d-wzs8s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wzs8s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-wzs8s,UID:4237a5fc-3145-11ea-a994-fa163e34d433,ResourceVersion:17472758,Generation:0,CreationTimestamp:2020-01-07 12:00:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f109a7 0xc001f109a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f10d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:00:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-07 12:00:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-07 12:00:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2cca37083e002c66300f88e2180c70e02fddf3d4468a8ae72679f3c3858853ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  7 12:01:07.009: INFO: Pod "nginx-deployment-85ddf47c5d-x8lpl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x8lpl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r9dxb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r9dxb/pods/nginx-deployment-85ddf47c5d-x8lpl,UID:628f3271-3145-11ea-a994-fa163e34d433,ResourceVersion:17472893,Generation:0,CreationTimestamp:2020-01-07 12:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 422d1974-3145-11ea-a994-fa163e34d433 0xc001f10e27 0xc001f10e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rs6zq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rs6zq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rs6zq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001f10e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f10fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:01:06 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:01:07.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-r9dxb" for this suite.
Jan  7 12:01:51.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:01:51.525: INFO: namespace: e2e-tests-deployment-r9dxb, resource: bindings, ignored listing per whitelist
Jan  7 12:01:51.600: INFO: namespace e2e-tests-deployment-r9dxb deletion completed in 44.522239339s

• [SLOW TEST:99.799 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
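The proportional-scaling test above drives a Deployment whose pods are dumped in the log (image `docker.io/library/nginx:1.14-alpine`, label `name: nginx`, controller-added `pod-template-hash: 85ddf47c5d`). A hypothetical reconstruction of that Deployment — replica count and rollout knobs are illustrative, not taken from the log:

```yaml
# Sketch only: fields inferred from the pod dump above; replicas and
# rollingUpdate values are assumptions, not logged values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: e2e-tests-deployment-r9dxb
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%          # proportional scaling distributes new replicas
      maxUnavailable: 25%    # across the old and new ReplicaSets
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

All pods in the dump share the hash `85ddf47c5d`, i.e. they belong to a single ReplicaSet owned by this Deployment.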
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:01:51.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  7 12:01:52.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:01:53.211: INFO: stderr: ""
Jan  7 12:01:53.211: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 12:01:53.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:01:53.577: INFO: stderr: ""
Jan  7 12:01:53.577: INFO: stdout: "update-demo-nautilus-9k49g update-demo-nautilus-wz4s5 "
Jan  7 12:01:53.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9k49g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:01:53.757: INFO: stderr: ""
Jan  7 12:01:53.757: INFO: stdout: ""
Jan  7 12:01:53.757: INFO: update-demo-nautilus-9k49g is created but not running
Jan  7 12:01:58.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:01:58.931: INFO: stderr: ""
Jan  7 12:01:58.931: INFO: stdout: "update-demo-nautilus-9k49g update-demo-nautilus-wz4s5 "
Jan  7 12:01:58.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9k49g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:01:59.634: INFO: stderr: ""
Jan  7 12:01:59.634: INFO: stdout: ""
Jan  7 12:01:59.634: INFO: update-demo-nautilus-9k49g is created but not running
Jan  7 12:02:04.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:04.765: INFO: stderr: ""
Jan  7 12:02:04.765: INFO: stdout: "update-demo-nautilus-9k49g update-demo-nautilus-wz4s5 "
Jan  7 12:02:04.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9k49g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:04.886: INFO: stderr: ""
Jan  7 12:02:04.886: INFO: stdout: ""
Jan  7 12:02:04.886: INFO: update-demo-nautilus-9k49g is created but not running
Jan  7 12:02:09.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:10.031: INFO: stderr: ""
Jan  7 12:02:10.032: INFO: stdout: "update-demo-nautilus-9k49g update-demo-nautilus-wz4s5 "
Jan  7 12:02:10.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9k49g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:10.161: INFO: stderr: ""
Jan  7 12:02:10.162: INFO: stdout: ""
Jan  7 12:02:10.162: INFO: update-demo-nautilus-9k49g is created but not running
Jan  7 12:02:15.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:15.336: INFO: stderr: ""
Jan  7 12:02:15.336: INFO: stdout: "update-demo-nautilus-9k49g update-demo-nautilus-wz4s5 "
Jan  7 12:02:15.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9k49g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:15.493: INFO: stderr: ""
Jan  7 12:02:15.493: INFO: stdout: "true"
Jan  7 12:02:15.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9k49g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:15.653: INFO: stderr: ""
Jan  7 12:02:15.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:02:15.653: INFO: validating pod update-demo-nautilus-9k49g
Jan  7 12:02:15.662: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:02:15.662: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:02:15.662: INFO: update-demo-nautilus-9k49g is verified up and running
Jan  7 12:02:15.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wz4s5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:15.856: INFO: stderr: ""
Jan  7 12:02:15.857: INFO: stdout: "true"
Jan  7 12:02:15.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wz4s5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:16.032: INFO: stderr: ""
Jan  7 12:02:16.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:02:16.032: INFO: validating pod update-demo-nautilus-wz4s5
Jan  7 12:02:16.048: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:02:16.048: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:02:16.048: INFO: update-demo-nautilus-wz4s5 is verified up and running
STEP: using delete to clean up resources
Jan  7 12:02:16.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:16.263: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 12:02:16.263: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  7 12:02:16.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8vfbk'
Jan  7 12:02:16.472: INFO: stderr: "No resources found.\n"
Jan  7 12:02:16.473: INFO: stdout: ""
Jan  7 12:02:16.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8vfbk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  7 12:02:16.645: INFO: stderr: ""
Jan  7 12:02:16.646: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:02:16.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8vfbk" for this suite.
Jan  7 12:02:40.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:02:40.807: INFO: namespace: e2e-tests-kubectl-8vfbk, resource: bindings, ignored listing per whitelist
Jan  7 12:02:40.915: INFO: namespace e2e-tests-kubectl-8vfbk deletion completed in 24.249758784s

• [SLOW TEST:49.315 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
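The Update Demo test pipes a replication controller to `kubectl create -f -` and then polls pods matching `-l name=update-demo`. A minimal sketch of what that manifest might look like — the image and label selector come from the log; `replicas: 2` matches the two pods observed (`update-demo-nautilus-9k49g`, `update-demo-nautilus-wz4s5`), but the exact manifest the framework embeds may carry additional labels:

```yaml
# Hypothetical reconstruction; only the image, controller name, and the
# name=update-demo label are confirmed by the log output.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
```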
SS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:02:40.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-k6ggv
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-k6ggv
STEP: Deleting pre-stop pod
Jan  7 12:03:08.534: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:03:08.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-k6ggv" for this suite.
Jan  7 12:03:48.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:03:48.775: INFO: namespace: e2e-tests-prestop-k6ggv, resource: bindings, ignored listing per whitelist
Jan  7 12:03:48.796: INFO: namespace e2e-tests-prestop-k6ggv deletion completed in 40.177558303s

• [SLOW TEST:67.880 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
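The PreStop test deletes a tester pod and asserts the server pod saw `"prestop": 1`, i.e. the preStop lifecycle hook fired before the container was killed. A minimal pod using that mechanism might look like the following sketch; the real e2e test uses its own tester/server images and endpoints, so every name here is illustrative:

```yaml
# Illustrative only: a preStop hook that notifies a peer before shutdown,
# which is the behavior this test verifies. Image, command, and the
# server:8080/prestop endpoint are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  terminationGracePeriodSeconds: 30   # the hook must complete inside this window
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://server:8080/prestop"]
```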
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:03:48.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  7 12:03:49.164: INFO: Waiting up to 5m0s for pod "pod-c37dd789-3145-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-hkn4k" to be "success or failure"
Jan  7 12:03:49.180: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.122922ms
Jan  7 12:03:52.099: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.935157776s
Jan  7 12:03:54.118: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.954123797s
Jan  7 12:03:56.595: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.431194219s
Jan  7 12:03:58.644: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.480063892s
Jan  7 12:04:00.671: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.507026216s
STEP: Saw pod success
Jan  7 12:04:00.671: INFO: Pod "pod-c37dd789-3145-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:04:00.688: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c37dd789-3145-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 12:04:00.860: INFO: Waiting for pod pod-c37dd789-3145-11ea-8b51-0242ac110005 to disappear
Jan  7 12:04:00.871: INFO: Pod pod-c37dd789-3145-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:04:00.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hkn4k" for this suite.
Jan  7 12:04:06.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:04:07.159: INFO: namespace: e2e-tests-emptydir-hkn4k, resource: bindings, ignored listing per whitelist
Jan  7 12:04:07.291: INFO: namespace e2e-tests-emptydir-hkn4k deletion completed in 6.413217524s

• [SLOW TEST:18.495 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
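The EmptyDir case name encodes the matrix point it exercises: non-root user, 0644 file mode, tmpfs-backed volume. A sketch of the kind of pod the test creates (fields are assumptions; the framework generates its own test image and command):

```yaml
# Sketch only: emptyDir backed by tmpfs (medium: Memory), written by a
# non-root UID; the test then checks the created file's 0644 permissions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  securityContext:
    runAsUser: 1001             # the "non-root" part of the test name
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
```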
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:04:07.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:04:07.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-7rjs6" for this suite.
Jan  7 12:04:29.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:04:29.803: INFO: namespace: e2e-tests-pods-7rjs6, resource: bindings, ignored listing per whitelist
Jan  7 12:04:29.814: INFO: namespace e2e-tests-pods-7rjs6 deletion completed in 22.235078469s

• [SLOW TEST:22.522 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
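The QoS test submits a pod and verifies `status.qosClass` was set by the API server. The class is derived from the containers' resource requests and limits; the pods dumped earlier in this log have no resources at all, hence `QOSClass:BestEffort`. An illustrative spec for the opposite end of the spectrum:

```yaml
# How qosClass is derived (illustrative pod, not from the log):
#   no requests/limits anywhere            -> BestEffort
#   requests == limits for every resource  -> Guaranteed
#   anything in between                    -> Burstable
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
# status.qosClass should be reported as Guaranteed for this spec.
```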
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:04:29.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-w742m in namespace e2e-tests-proxy-wv576
I0107 12:04:30.155718       8 runners.go:184] Created replication controller with name: proxy-service-w742m, namespace: e2e-tests-proxy-wv576, replica count: 1
I0107 12:04:31.206882       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:32.207389       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:33.207975       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:34.208932       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:35.209480       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:36.209964       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:37.210380       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:38.210967       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:39.211548       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:04:40.212023       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0107 12:04:41.212485       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0107 12:04:42.212904       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0107 12:04:43.213522       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0107 12:04:44.214299       8 runners.go:184] proxy-service-w742m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  7 12:04:44.229: INFO: setup took 14.191040878s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  7 12:04:44.262: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wv576/pods/proxy-service-w742m-r7sgb:162/proxy/: bar (200; 31.741964ms)
Jan  7 12:04:44.263: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wv576/pods/http:proxy-service-w742m-r7sgb:1080/proxy/: …
[log truncated: the remaining proxy attempts, the proxy test summary, and the header of the next test are missing]
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:05:09.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-bpqfv" to be "success or failure"
Jan  7 12:05:09.304: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.791712ms
Jan  7 12:05:11.597: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297407527s
Jan  7 12:05:13.630: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330911527s
Jan  7 12:05:16.259: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959810041s
Jan  7 12:05:18.284: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.984322471s
Jan  7 12:05:20.300: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.000168785s
STEP: Saw pod success
Jan  7 12:05:20.300: INFO: Pod "downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:05:20.307: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:05:20.598: INFO: Waiting for pod downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005 to disappear
Jan  7 12:05:20.641: INFO: Pod downwardapi-volume-f3501fab-3145-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:05:20.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bpqfv" for this suite.
Jan  7 12:05:26.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:05:26.886: INFO: namespace: e2e-tests-downward-api-bpqfv, resource: bindings, ignored listing per whitelist
Jan  7 12:05:26.936: INFO: namespace e2e-tests-downward-api-bpqfv deletion completed in 6.231184546s

• [SLOW TEST:17.827 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:05:26.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-vsb4
STEP: Creating a pod to test atomic-volume-subpath
Jan  7 12:05:27.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vsb4" in namespace "e2e-tests-subpath-jj95d" to be "success or failure"
Jan  7 12:05:27.183: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.411481ms
Jan  7 12:05:29.386: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210062064s
Jan  7 12:05:31.416: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240762532s
Jan  7 12:05:33.901: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.725382973s
Jan  7 12:05:35.922: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.746126466s
Jan  7 12:05:37.993: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.817232544s
Jan  7 12:05:40.185: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.009626361s
Jan  7 12:05:42.216: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.040117607s
Jan  7 12:05:44.236: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 17.060755031s
Jan  7 12:05:46.257: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 19.08100797s
Jan  7 12:05:48.268: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 21.092520692s
Jan  7 12:05:50.313: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 23.136814517s
Jan  7 12:05:52.328: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 25.152635973s
Jan  7 12:05:54.356: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 27.180359221s
Jan  7 12:05:56.378: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 29.202094046s
Jan  7 12:05:58.397: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 31.221341557s
Jan  7 12:06:00.592: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Running", Reason="", readiness=false. Elapsed: 33.41592682s
Jan  7 12:06:02.615: INFO: Pod "pod-subpath-test-configmap-vsb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.438908062s
STEP: Saw pod success
Jan  7 12:06:02.615: INFO: Pod "pod-subpath-test-configmap-vsb4" satisfied condition "success or failure"
Jan  7 12:06:02.623: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-vsb4 container test-container-subpath-configmap-vsb4: 
STEP: delete the pod
Jan  7 12:06:02.715: INFO: Waiting for pod pod-subpath-test-configmap-vsb4 to disappear
Jan  7 12:06:02.806: INFO: Pod pod-subpath-test-configmap-vsb4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vsb4
Jan  7 12:06:02.806: INFO: Deleting pod "pod-subpath-test-configmap-vsb4" in namespace "e2e-tests-subpath-jj95d"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:06:02.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jj95d" for this suite.
Jan  7 12:06:08.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:06:09.136: INFO: namespace: e2e-tests-subpath-jj95d, resource: bindings, ignored listing per whitelist
Jan  7 12:06:09.207: INFO: namespace e2e-tests-subpath-jj95d deletion completed in 6.382293429s

• [SLOW TEST:42.271 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:06:09.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  7 12:06:22.624: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:06:23.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-rt7dp" for this suite.
Jan  7 12:06:50.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:06:50.679: INFO: namespace: e2e-tests-replicaset-rt7dp, resource: bindings, ignored listing per whitelist
Jan  7 12:06:50.813: INFO: namespace e2e-tests-replicaset-rt7dp deletion completed in 26.535051931s

• [SLOW TEST:41.605 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:06:50.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2ff89960-3146-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 12:06:51.093: INFO: Waiting up to 5m0s for pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-lhtc8" to be "success or failure"
Jan  7 12:06:51.237: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 143.439684ms
Jan  7 12:06:53.376: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282814579s
Jan  7 12:06:55.391: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298134402s
Jan  7 12:06:57.450: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356493658s
Jan  7 12:06:59.954: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.860747973s
Jan  7 12:07:02.245: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.151324197s
STEP: Saw pod success
Jan  7 12:07:02.245: INFO: Pod "pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:07:02.255: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 12:07:02.757: INFO: Waiting for pod pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005 to disappear
Jan  7 12:07:02.766: INFO: Pod pod-secrets-2ffa36fe-3146-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:07:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lhtc8" for this suite.
Jan  7 12:07:08.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:07:08.960: INFO: namespace: e2e-tests-secrets-lhtc8, resource: bindings, ignored listing per whitelist
Jan  7 12:07:09.041: INFO: namespace e2e-tests-secrets-lhtc8 deletion completed in 6.254652826s

• [SLOW TEST:18.228 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:07:09.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  7 12:07:09.256: INFO: Waiting up to 5m0s for pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-txmwj" to be "success or failure"
Jan  7 12:07:09.275: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.825121ms
Jan  7 12:07:11.287: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030970516s
Jan  7 12:07:13.307: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051018478s
Jan  7 12:07:16.005: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.748865796s
Jan  7 12:07:18.032: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776351372s
Jan  7 12:07:20.053: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.79717455s
Jan  7 12:07:22.078: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.8221055s
STEP: Saw pod success
Jan  7 12:07:22.078: INFO: Pod "downward-api-3ace6cda-3146-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:07:22.085: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3ace6cda-3146-11ea-8b51-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  7 12:07:22.162: INFO: Waiting for pod downward-api-3ace6cda-3146-11ea-8b51-0242ac110005 to disappear
Jan  7 12:07:22.289: INFO: Pod downward-api-3ace6cda-3146-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:07:22.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-txmwj" for this suite.
Jan  7 12:07:28.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:07:28.447: INFO: namespace: e2e-tests-downward-api-txmwj, resource: bindings, ignored listing per whitelist
Jan  7 12:07:28.572: INFO: namespace e2e-tests-downward-api-txmwj deletion completed in 6.270303874s

• [SLOW TEST:19.530 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:07:28.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 12:07:28.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:07:39.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-stdjg" for this suite.
Jan  7 12:08:23.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:08:23.563: INFO: namespace: e2e-tests-pods-stdjg, resource: bindings, ignored listing per whitelist
Jan  7 12:08:23.643: INFO: namespace e2e-tests-pods-stdjg deletion completed in 44.285481144s

• [SLOW TEST:55.070 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:08:23.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-67685a39-3146-11ea-8b51-0242ac110005
STEP: Creating secret with name s-test-opt-upd-67685f65-3146-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-67685a39-3146-11ea-8b51-0242ac110005
STEP: Updating secret s-test-opt-upd-67685f65-3146-11ea-8b51-0242ac110005
STEP: Creating secret with name s-test-opt-create-67685ff3-3146-11ea-8b51-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:10:10.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wrwrz" for this suite.
Jan  7 12:10:50.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:10:50.484: INFO: namespace: e2e-tests-projected-wrwrz, resource: bindings, ignored listing per whitelist
Jan  7 12:10:50.638: INFO: namespace e2e-tests-projected-wrwrz deletion completed in 40.489601946s

• [SLOW TEST:146.994 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:10:50.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:10:50.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-9kmp4" to be "success or failure"
Jan  7 12:10:50.905: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.897596ms
Jan  7 12:10:52.977: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106802478s
Jan  7 12:10:55.014: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143664994s
Jan  7 12:10:57.621: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.750564256s
Jan  7 12:10:59.638: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767728966s
Jan  7 12:11:01.655: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.784687394s
STEP: Saw pod success
Jan  7 12:11:01.656: INFO: Pod "downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:11:01.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:11:02.876: INFO: Waiting for pod downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005 to disappear
Jan  7 12:11:02.908: INFO: Pod downwardapi-volume-bedb96ca-3146-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:11:02.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9kmp4" for this suite.
Jan  7 12:11:09.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:11:09.799: INFO: namespace: e2e-tests-downward-api-9kmp4, resource: bindings, ignored listing per whitelist
Jan  7 12:11:09.844: INFO: namespace e2e-tests-downward-api-9kmp4 deletion completed in 6.915284651s

• [SLOW TEST:19.205 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:11:09.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-dvnj
STEP: Creating a pod to test atomic-volume-subpath
Jan  7 12:11:10.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dvnj" in namespace "e2e-tests-subpath-2t792" to be "success or failure"
Jan  7 12:11:10.159: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 60.792676ms
Jan  7 12:11:12.175: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076284351s
Jan  7 12:11:14.194: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095674014s
Jan  7 12:11:16.208: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109354096s
Jan  7 12:11:18.273: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175122521s
Jan  7 12:11:20.290: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19168317s
Jan  7 12:11:22.308: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209951945s
Jan  7 12:11:24.371: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.272425121s
Jan  7 12:11:26.440: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.341539328s
Jan  7 12:11:28.485: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 18.386775415s
Jan  7 12:11:30.523: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 20.424881182s
Jan  7 12:11:32.568: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 22.470011344s
Jan  7 12:11:34.606: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 24.507435968s
Jan  7 12:11:36.624: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 26.525472024s
Jan  7 12:11:38.638: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 28.539454458s
Jan  7 12:11:40.654: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 30.555788646s
Jan  7 12:11:42.677: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Running", Reason="", readiness=false. Elapsed: 32.578743987s
Jan  7 12:11:44.786: INFO: Pod "pod-subpath-test-configmap-dvnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.6882782s
STEP: Saw pod success
Jan  7 12:11:44.787: INFO: Pod "pod-subpath-test-configmap-dvnj" satisfied condition "success or failure"
Jan  7 12:11:44.795: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-dvnj container test-container-subpath-configmap-dvnj: 
STEP: delete the pod
Jan  7 12:11:45.077: INFO: Waiting for pod pod-subpath-test-configmap-dvnj to disappear
Jan  7 12:11:45.113: INFO: Pod pod-subpath-test-configmap-dvnj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dvnj
Jan  7 12:11:45.114: INFO: Deleting pod "pod-subpath-test-configmap-dvnj" in namespace "e2e-tests-subpath-2t792"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:11:45.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2t792" for this suite.
Jan  7 12:11:53.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:11:53.280: INFO: namespace: e2e-tests-subpath-2t792, resource: bindings, ignored listing per whitelist
Jan  7 12:11:53.355: INFO: namespace e2e-tests-subpath-2t792 deletion completed in 8.220274456s

• [SLOW TEST:43.511 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:11:53.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  7 12:11:53.574: INFO: Waiting up to 5m0s for pod "pod-e4471522-3146-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-wvr62" to be "success or failure"
Jan  7 12:11:53.590: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.544248ms
Jan  7 12:11:55.612: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037956017s
Jan  7 12:11:57.633: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05840307s
Jan  7 12:11:59.748: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173156921s
Jan  7 12:12:01.769: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194404794s
Jan  7 12:12:03.794: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.220074297s
STEP: Saw pod success
Jan  7 12:12:03.795: INFO: Pod "pod-e4471522-3146-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:12:03.810: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e4471522-3146-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 12:12:03.998: INFO: Waiting for pod pod-e4471522-3146-11ea-8b51-0242ac110005 to disappear
Jan  7 12:12:04.011: INFO: Pod pod-e4471522-3146-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:12:04.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wvr62" for this suite.
Jan  7 12:12:10.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:12:10.276: INFO: namespace: e2e-tests-emptydir-wvr62, resource: bindings, ignored listing per whitelist
Jan  7 12:12:10.369: INFO: namespace e2e-tests-emptydir-wvr62 deletion completed in 6.345629836s

• [SLOW TEST:17.012 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:12:10.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  7 12:12:10.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xbtj6'
Jan  7 12:12:12.901: INFO: stderr: ""
Jan  7 12:12:12.901: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  7 12:12:13.920: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:13.920: INFO: Found 0 / 1
Jan  7 12:12:15.657: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:15.658: INFO: Found 0 / 1
Jan  7 12:12:15.962: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:15.962: INFO: Found 0 / 1
Jan  7 12:12:16.919: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:16.919: INFO: Found 0 / 1
Jan  7 12:12:18.557: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:18.557: INFO: Found 0 / 1
Jan  7 12:12:19.496: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:19.496: INFO: Found 0 / 1
Jan  7 12:12:20.026: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:20.026: INFO: Found 0 / 1
Jan  7 12:12:20.965: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:20.965: INFO: Found 0 / 1
Jan  7 12:12:21.923: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:21.923: INFO: Found 0 / 1
Jan  7 12:12:22.919: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:22.919: INFO: Found 0 / 1
Jan  7 12:12:23.925: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:23.925: INFO: Found 1 / 1
Jan  7 12:12:23.926: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  7 12:12:23.934: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:23.934: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  7 12:12:23.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9qbr6 --namespace=e2e-tests-kubectl-xbtj6 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  7 12:12:24.122: INFO: stderr: ""
Jan  7 12:12:24.123: INFO: stdout: "pod/redis-master-9qbr6 patched\n"
STEP: checking annotations
Jan  7 12:12:24.135: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:12:24.135: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:12:24.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xbtj6" for this suite.
Jan  7 12:12:48.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:12:48.344: INFO: namespace: e2e-tests-kubectl-xbtj6, resource: bindings, ignored listing per whitelist
Jan  7 12:12:48.376: INFO: namespace e2e-tests-kubectl-xbtj6 deletion completed in 24.23471161s

• [SLOW TEST:38.007 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
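The patch step recorded above can be reproduced by hand. A minimal sketch using the pod name and namespace shown in the log; the jsonpath check is an addition for illustration, not part of the test:

```shell
# Apply the same strategic-merge patch the e2e test runs
kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9qbr6 \
  --namespace=e2e-tests-kubectl-xbtj6 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'

# Verify the annotation landed
kubectl --kubeconfig=/root/.kube/config get pod redis-master-9qbr6 \
  --namespace=e2e-tests-kubectl-xbtj6 \
  -o jsonpath='{.metadata.annotations.x}'
```

Both commands require access to the live cluster from the run; the pod name is ephemeral and will differ on any re-run.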
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:12:48.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  7 12:12:48.596: INFO: Waiting up to 5m0s for pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-containers-pv2zt" to be "success or failure"
Jan  7 12:12:48.604: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.657973ms
Jan  7 12:12:50.625: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029243842s
Jan  7 12:12:52.640: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044042515s
Jan  7 12:12:54.787: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19071368s
Jan  7 12:12:56.887: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290603448s
Jan  7 12:12:58.907: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311106016s
Jan  7 12:13:00.974: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.378265515s
STEP: Saw pod success
Jan  7 12:13:00.975: INFO: Pod "client-containers-051239bb-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:13:01.018: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-051239bb-3147-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 12:13:01.279: INFO: Waiting for pod client-containers-051239bb-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:13:01.348: INFO: Pod client-containers-051239bb-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:13:01.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-pv2zt" for this suite.
Jan  7 12:13:07.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:13:07.491: INFO: namespace: e2e-tests-containers-pv2zt, resource: bindings, ignored listing per whitelist
Jan  7 12:13:07.569: INFO: namespace e2e-tests-containers-pv2zt deletion completed in 6.209985434s

• [SLOW TEST:19.192 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
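The Docker Containers test above exercises the `command`/`args` override: in a pod spec, `command` replaces the image's ENTRYPOINT and `args` replaces its CMD. A minimal sketch of such a pod; the name, image, and arguments here are illustrative, not the test's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image; the test uses its own
    command: ["/bin/echo"]          # overrides the image's ENTRYPOINT
    args: ["override", "arguments"] # overrides the image's CMD
```

The test framework then waits for the pod to reach `Succeeded` and inspects the container log, as the "success or failure" polling above shows.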
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:13:07.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1084e286-3147-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 12:13:07.808: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-mrh2j" to be "success or failure"
Jan  7 12:13:07.905: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.963337ms
Jan  7 12:13:09.923: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114773091s
Jan  7 12:13:11.958: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14984686s
Jan  7 12:13:14.196: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.388253448s
Jan  7 12:13:16.220: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412051795s
Jan  7 12:13:18.234: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425923981s
STEP: Saw pod success
Jan  7 12:13:18.234: INFO: Pod "pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:13:18.238: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 12:13:19.022: INFO: Waiting for pod pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:13:19.070: INFO: Pod pod-projected-configmaps-108617dc-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:13:19.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mrh2j" for this suite.
Jan  7 12:13:25.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:13:25.308: INFO: namespace: e2e-tests-projected-mrh2j, resource: bindings, ignored listing per whitelist
Jan  7 12:13:25.378: INFO: namespace e2e-tests-projected-mrh2j deletion completed in 6.291271835s

• [SLOW TEST:17.809 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
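The projected-configMap test consumes a ConfigMap through a `projected` volume rather than a plain `configMap` volume. A sketch of the shape involved; the ConfigMap name follows the pattern in the log, but the key, path, and image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumed image
    command: ["cat", "/etc/projected/data"] # assumed key path
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-example  # illustrative
          items:
          - key: data                       # assumed key name
            path: data
```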
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:13:25.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:13:25.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-nm62m" to be "success or failure"
Jan  7 12:13:25.910: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 308.186626ms
Jan  7 12:13:28.711: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.10963274s
Jan  7 12:13:30.723: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121414884s
Jan  7 12:13:32.738: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.136706286s
Jan  7 12:13:34.785: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.18293545s
Jan  7 12:13:36.799: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.197437564s
Jan  7 12:13:38.811: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.209672554s
Jan  7 12:13:40.835: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.233193778s
STEP: Saw pod success
Jan  7 12:13:40.835: INFO: Pod "downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:13:40.858: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:13:40.952: INFO: Waiting for pod downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:13:41.013: INFO: Pod downwardapi-volume-1b202b23-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:13:41.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nm62m" for this suite.
Jan  7 12:13:47.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:13:47.310: INFO: namespace: e2e-tests-downward-api-nm62m, resource: bindings, ignored listing per whitelist
Jan  7 12:13:47.323: INFO: namespace e2e-tests-downward-api-nm62m deletion completed in 6.267502096s

• [SLOW TEST:21.944 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
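The Downward API volume test above surfaces the container's CPU limit as a file via a `resourceFieldRef`. A minimal sketch, with an assumed image and an assumed 500m limit (the test sets its own values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                    # assumed limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                # report the limit in millicores
```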
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:13:47.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  7 12:13:47.725: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pzszh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzszh/configmaps/e2e-watch-test-label-changed,UID:284cc049-3147-11ea-a994-fa163e34d433,ResourceVersion:17474560,Generation:0,CreationTimestamp:2020-01-07 12:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  7 12:13:47.726: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pzszh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzszh/configmaps/e2e-watch-test-label-changed,UID:284cc049-3147-11ea-a994-fa163e34d433,ResourceVersion:17474561,Generation:0,CreationTimestamp:2020-01-07 12:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  7 12:13:47.726: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pzszh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzszh/configmaps/e2e-watch-test-label-changed,UID:284cc049-3147-11ea-a994-fa163e34d433,ResourceVersion:17474562,Generation:0,CreationTimestamp:2020-01-07 12:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  7 12:13:57.827: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pzszh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzszh/configmaps/e2e-watch-test-label-changed,UID:284cc049-3147-11ea-a994-fa163e34d433,ResourceVersion:17474576,Generation:0,CreationTimestamp:2020-01-07 12:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  7 12:13:57.827: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pzszh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzszh/configmaps/e2e-watch-test-label-changed,UID:284cc049-3147-11ea-a994-fa163e34d433,ResourceVersion:17474577,Generation:0,CreationTimestamp:2020-01-07 12:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  7 12:13:57.828: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-pzszh,SelfLink:/api/v1/namespaces/e2e-tests-watch-pzszh/configmaps/e2e-watch-test-label-changed,UID:284cc049-3147-11ea-a994-fa163e34d433,ResourceVersion:17474578,Generation:0,CreationTimestamp:2020-01-07 12:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:13:57.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pzszh" for this suite.
Jan  7 12:14:03.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:14:04.050: INFO: namespace: e2e-tests-watch-pzszh, resource: bindings, ignored listing per whitelist
Jan  7 12:14:04.107: INFO: namespace e2e-tests-watch-pzszh deletion completed in 6.269297884s

• [SLOW TEST:16.783 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
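The Watchers test drives a label-selector watch: events flow only while the object's label matches, which is why the DELETED notification appears when the label is changed away and an ADDED one when it is restored. The equivalent from the CLI, using the namespace and label from the log (requires the live cluster):

```shell
# Watch only configmaps carrying the label the test toggles; the stream goes
# quiet while the label is changed away and resumes when it is restored
kubectl --kubeconfig=/root/.kube/config get configmaps \
  --namespace=e2e-tests-watch-pzszh \
  -l watch-this-configmap=label-changed-and-restored --watch
```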
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:14:04.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0107 12:14:47.165483       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 12:14:47.165: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:14:47.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lstwg" for this suite.
Jan  7 12:14:55.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:14:55.400: INFO: namespace: e2e-tests-gc-lstwg, resource: bindings, ignored listing per whitelist
Jan  7 12:14:55.465: INFO: namespace e2e-tests-gc-lstwg deletion completed in 8.294140143s

• [SLOW TEST:51.358 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
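The "orphan pods" behavior tested above corresponds to deleting a replication controller without cascading. In kubectl v1.13 (matching the server in this run) that is spelled `--cascade=false`; newer kubectl spells it `--cascade=orphan`. A sketch; `<rc-name>` is a placeholder, since the log does not record the rc's name:

```shell
# Ask the API server to orphan the rc's pods instead of garbage-collecting
# them; the pods survive the rc's deletion, as the 30-second check verifies
kubectl --kubeconfig=/root/.kube/config delete rc <rc-name> \
  --namespace=e2e-tests-gc-lstwg --cascade=false
```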
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:14:55.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  7 12:15:05.393: INFO: 10 pods remaining
Jan  7 12:15:05.393: INFO: 10 pods has nil DeletionTimestamp
Jan  7 12:15:05.393: INFO: 
Jan  7 12:15:11.044: INFO: 9 pods remaining
Jan  7 12:15:11.045: INFO: 0 pods has nil DeletionTimestamp
Jan  7 12:15:11.045: INFO: 
Jan  7 12:15:12.866: INFO: 0 pods remaining
Jan  7 12:15:12.867: INFO: 0 pods has nil DeletionTimestamp
Jan  7 12:15:12.867: INFO: 
Jan  7 12:15:14.903: INFO: 0 pods remaining
Jan  7 12:15:14.903: INFO: 0 pods has nil DeletionTimestamp
Jan  7 12:15:14.903: INFO: 
STEP: Gathering metrics
W0107 12:15:15.766844       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 12:15:15.767: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:15:15.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-gcndp" for this suite.
Jan  7 12:15:32.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:15:34.237: INFO: namespace: e2e-tests-gc-gcndp, resource: bindings, ignored listing per whitelist
Jan  7 12:15:34.361: INFO: namespace e2e-tests-gc-gcndp deletion completed in 18.585077253s

• [SLOW TEST:38.896 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
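The test above is the foreground-deletion counterpart: with `propagationPolicy: Foreground` in the DeleteOptions, the rc remains (carrying a deletionTimestamp) until every dependent pod is gone, which is what the "N pods remaining" countdown shows. A sketch using the raw API, since kubectl v1.13 did not expose foreground deletion as a flag; `<rc-name>` is a placeholder, as the log does not record the rc's name:

```shell
# Foreground cascading deletion via the API: the rc is kept until all of
# its pods have been deleted, then removed itself
kubectl --kubeconfig=/root/.kube/config proxy --port=8001 &
curl -X DELETE \
  "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-gc-gcndp/replicationcontrollers/<rc-name>" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
```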
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:15:34.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  7 12:15:35.036: INFO: Waiting up to 5m0s for pod "pod-68443570-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-jxzvp" to be "success or failure"
Jan  7 12:15:35.272: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 235.341726ms
Jan  7 12:15:37.288: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251608197s
Jan  7 12:15:39.840: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.803812847s
Jan  7 12:15:41.855: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819120373s
Jan  7 12:15:43.893: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.856907456s
Jan  7 12:15:46.210: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.173156534s
Jan  7 12:15:48.230: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.193132672s
Jan  7 12:15:50.244: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.207189961s
Jan  7 12:15:52.258: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.222040285s
STEP: Saw pod success
Jan  7 12:15:52.259: INFO: Pod "pod-68443570-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:15:52.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-68443570-3147-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 12:15:52.490: INFO: Waiting for pod pod-68443570-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:15:52.499: INFO: Pod pod-68443570-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:15:52.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jxzvp" for this suite.
Jan  7 12:15:58.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:15:58.721: INFO: namespace: e2e-tests-emptydir-jxzvp, resource: bindings, ignored listing per whitelist
Jan  7 12:15:58.829: INFO: namespace e2e-tests-emptydir-jxzvp deletion completed in 6.32203317s

• [SLOW TEST:24.467 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:15:58.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:15:59.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-rp4rn" to be "success or failure"
Jan  7 12:15:59.168: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.956828ms
Jan  7 12:16:01.184: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038321878s
Jan  7 12:16:03.199: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053113735s
Jan  7 12:16:05.622: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476289813s
Jan  7 12:16:07.662: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516905727s
Jan  7 12:16:09.702: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556929322s
STEP: Saw pod success
Jan  7 12:16:09.703: INFO: Pod "downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:16:10.383: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:16:10.772: INFO: Waiting for pod downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:16:10.877: INFO: Pod downwardapi-volume-769868e3-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:16:10.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rp4rn" for this suite.
Jan  7 12:16:16.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:16:17.015: INFO: namespace: e2e-tests-downward-api-rp4rn, resource: bindings, ignored listing per whitelist
Jan  7 12:16:17.131: INFO: namespace e2e-tests-downward-api-rp4rn deletion completed in 6.23676557s

• [SLOW TEST:18.301 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:16:17.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 12:16:17.389: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  7 12:16:17.400: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-s5sts/daemonsets","resourceVersion":"17475068"},"items":null}

Jan  7 12:16:17.404: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-s5sts/pods","resourceVersion":"17475068"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:16:17.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-s5sts" for this suite.
Jan  7 12:16:23.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:16:23.646: INFO: namespace: e2e-tests-daemonsets-s5sts, resource: bindings, ignored listing per whitelist
Jan  7 12:16:23.653: INFO: namespace e2e-tests-daemonsets-s5sts deletion completed in 6.232792452s

S [SKIPPING] [6.521 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  7 12:16:17.389: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:16:23.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  7 12:16:23.835: INFO: Waiting up to 5m0s for pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-56lhj" to be "success or failure"
Jan  7 12:16:23.858: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.114823ms
Jan  7 12:16:26.185: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348966661s
Jan  7 12:16:28.246: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410678664s
Jan  7 12:16:30.278: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442760845s
Jan  7 12:16:32.304: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.468503156s
Jan  7 12:16:34.337: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.500976687s
STEP: Saw pod success
Jan  7 12:16:34.337: INFO: Pod "pod-855e3a4b-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:16:34.411: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-855e3a4b-3147-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 12:16:35.928: INFO: Waiting for pod pod-855e3a4b-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:16:36.495: INFO: Pod pod-855e3a4b-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:16:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-56lhj" for this suite.
Jan  7 12:16:42.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:16:42.814: INFO: namespace: e2e-tests-emptydir-56lhj, resource: bindings, ignored listing per whitelist
Jan  7 12:16:42.833: INFO: namespace e2e-tests-emptydir-56lhj deletion completed in 6.313612851s

• [SLOW TEST:19.179 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:16:42.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-zlxh
STEP: Creating a pod to test atomic-volume-subpath
Jan  7 12:16:43.067: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zlxh" in namespace "e2e-tests-subpath-pm8lv" to be "success or failure"
Jan  7 12:16:43.277: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 209.699043ms
Jan  7 12:16:45.296: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229296401s
Jan  7 12:16:47.319: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251620156s
Jan  7 12:16:49.904: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.836637916s
Jan  7 12:16:51.936: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.868927286s
Jan  7 12:16:53.959: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.89180097s
Jan  7 12:16:55.980: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.91300744s
Jan  7 12:16:58.153: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.085526222s
Jan  7 12:17:00.193: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Pending", Reason="", readiness=false. Elapsed: 17.126470892s
Jan  7 12:17:02.209: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 19.141890903s
Jan  7 12:17:04.226: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 21.159064621s
Jan  7 12:17:06.245: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 23.177872975s
Jan  7 12:17:08.271: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 25.203935245s
Jan  7 12:17:10.287: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 27.220171577s
Jan  7 12:17:12.301: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 29.233643152s
Jan  7 12:17:14.312: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 31.244915612s
Jan  7 12:17:16.358: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 33.290542941s
Jan  7 12:17:18.382: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Running", Reason="", readiness=false. Elapsed: 35.314720317s
Jan  7 12:17:20.515: INFO: Pod "pod-subpath-test-projected-zlxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.447709968s
STEP: Saw pod success
Jan  7 12:17:20.515: INFO: Pod "pod-subpath-test-projected-zlxh" satisfied condition "success or failure"
Jan  7 12:17:20.578: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-zlxh container test-container-subpath-projected-zlxh: 
STEP: delete the pod
Jan  7 12:17:21.093: INFO: Waiting for pod pod-subpath-test-projected-zlxh to disappear
Jan  7 12:17:21.114: INFO: Pod pod-subpath-test-projected-zlxh no longer exists
STEP: Deleting pod pod-subpath-test-projected-zlxh
Jan  7 12:17:21.114: INFO: Deleting pod "pod-subpath-test-projected-zlxh" in namespace "e2e-tests-subpath-pm8lv"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:17:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-pm8lv" for this suite.
Jan  7 12:17:27.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:17:27.279: INFO: namespace: e2e-tests-subpath-pm8lv, resource: bindings, ignored listing per whitelist
Jan  7 12:17:27.350: INFO: namespace e2e-tests-subpath-pm8lv deletion completed in 6.218293678s

• [SLOW TEST:44.517 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:17:27.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-gdkc
STEP: Creating a pod to test atomic-volume-subpath
Jan  7 12:17:27.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gdkc" in namespace "e2e-tests-subpath-gnptc" to be "success or failure"
Jan  7 12:17:27.560: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.377569ms
Jan  7 12:17:29.712: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164290345s
Jan  7 12:17:31.733: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185512594s
Jan  7 12:17:34.088: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.539967171s
Jan  7 12:17:36.955: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.407556472s
Jan  7 12:17:38.983: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.434846391s
Jan  7 12:17:41.228: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.680145254s
Jan  7 12:17:43.259: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.711557903s
Jan  7 12:17:45.281: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 17.732990151s
Jan  7 12:17:47.302: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 19.754081653s
Jan  7 12:17:49.328: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 21.780633806s
Jan  7 12:17:51.347: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 23.799583496s
Jan  7 12:17:53.368: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 25.819792219s
Jan  7 12:17:55.387: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 27.839425124s
Jan  7 12:17:57.402: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 29.853870491s
Jan  7 12:17:59.484: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 31.93638526s
Jan  7 12:18:01.504: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Running", Reason="", readiness=false. Elapsed: 33.956376896s
Jan  7 12:18:03.521: INFO: Pod "pod-subpath-test-secret-gdkc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.972839314s
STEP: Saw pod success
Jan  7 12:18:03.521: INFO: Pod "pod-subpath-test-secret-gdkc" satisfied condition "success or failure"
Jan  7 12:18:03.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gdkc container test-container-subpath-secret-gdkc: 
STEP: delete the pod
Jan  7 12:18:03.653: INFO: Waiting for pod pod-subpath-test-secret-gdkc to disappear
Jan  7 12:18:03.663: INFO: Pod pod-subpath-test-secret-gdkc no longer exists
STEP: Deleting pod pod-subpath-test-secret-gdkc
Jan  7 12:18:03.663: INFO: Deleting pod "pod-subpath-test-secret-gdkc" in namespace "e2e-tests-subpath-gnptc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:18:03.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gnptc" for this suite.
Jan  7 12:18:09.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:18:09.743: INFO: namespace: e2e-tests-subpath-gnptc, resource: bindings, ignored listing per whitelist
Jan  7 12:18:09.952: INFO: namespace e2e-tests-subpath-gnptc deletion completed in 6.274233947s

• [SLOW TEST:42.601 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:18:09.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:18:10.199: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-7cjkb" to be "success or failure"
Jan  7 12:18:10.210: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389311ms
Jan  7 12:18:12.222: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022787197s
Jan  7 12:18:14.264: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064835061s
Jan  7 12:18:16.579: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380093292s
Jan  7 12:18:18.646: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446556211s
Jan  7 12:18:20.670: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.470461471s
Jan  7 12:18:22.684: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.484612358s
STEP: Saw pod success
Jan  7 12:18:22.684: INFO: Pod "downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:18:22.698: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:18:23.812: INFO: Waiting for pod downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005 to disappear
Jan  7 12:18:23.879: INFO: Pod downwardapi-volume-c4bbb9c9-3147-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:18:23.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7cjkb" for this suite.
Jan  7 12:18:29.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:18:30.068: INFO: namespace: e2e-tests-downward-api-7cjkb, resource: bindings, ignored listing per whitelist
Jan  7 12:18:30.167: INFO: namespace e2e-tests-downward-api-7cjkb deletion completed in 6.262981185s

• [SLOW TEST:20.215 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:18:30.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  7 12:18:30.374: INFO: namespace e2e-tests-kubectl-x2vrs
Jan  7 12:18:30.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x2vrs'
Jan  7 12:18:30.976: INFO: stderr: ""
Jan  7 12:18:30.976: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  7 12:18:32.610: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:32.610: INFO: Found 0 / 1
Jan  7 12:18:32.996: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:32.996: INFO: Found 0 / 1
Jan  7 12:18:34.047: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:34.047: INFO: Found 0 / 1
Jan  7 12:18:35.015: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:35.016: INFO: Found 0 / 1
Jan  7 12:18:36.822: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:36.823: INFO: Found 0 / 1
Jan  7 12:18:37.300: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:37.300: INFO: Found 0 / 1
Jan  7 12:18:38.902: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:38.902: INFO: Found 0 / 1
Jan  7 12:18:39.070: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:39.071: INFO: Found 0 / 1
Jan  7 12:18:40.008: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:40.009: INFO: Found 0 / 1
Jan  7 12:18:40.995: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:40.995: INFO: Found 0 / 1
Jan  7 12:18:42.203: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:42.204: INFO: Found 1 / 1
Jan  7 12:18:42.204: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  7 12:18:42.220: INFO: Selector matched 1 pods for map[app:redis]
Jan  7 12:18:42.220: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  7 12:18:42.220: INFO: wait on redis-master startup in e2e-tests-kubectl-x2vrs 
Jan  7 12:18:42.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wpth9 redis-master --namespace=e2e-tests-kubectl-x2vrs'
Jan  7 12:18:42.421: INFO: stderr: ""
Jan  7 12:18:42.421: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jan 12:18:40.056 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jan 12:18:40.056 # Server started, Redis version 3.2.12\n1:M 07 Jan 12:18:40.056 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jan 12:18:40.056 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  7 12:18:42.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-x2vrs'
Jan  7 12:18:42.653: INFO: stderr: ""
Jan  7 12:18:42.653: INFO: stdout: "service/rm2 exposed\n"
Jan  7 12:18:42.659: INFO: Service rm2 in namespace e2e-tests-kubectl-x2vrs found.
STEP: exposing service
Jan  7 12:18:44.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-x2vrs'
Jan  7 12:18:45.005: INFO: stderr: ""
Jan  7 12:18:45.005: INFO: stdout: "service/rm3 exposed\n"
Jan  7 12:18:45.042: INFO: Service rm3 in namespace e2e-tests-kubectl-x2vrs found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:18:47.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x2vrs" for this suite.
Jan  7 12:19:13.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:19:13.277: INFO: namespace: e2e-tests-kubectl-x2vrs, resource: bindings, ignored listing per whitelist
Jan  7 12:19:13.310: INFO: namespace e2e-tests-kubectl-x2vrs deletion completed in 26.228912183s

• [SLOW TEST:43.142 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:19:13.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 12:19:13.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-fr2s4'
Jan  7 12:19:13.654: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 12:19:13.655: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan  7 12:19:17.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-fr2s4'
Jan  7 12:19:18.060: INFO: stderr: ""
Jan  7 12:19:18.060: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:19:18.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fr2s4" for this suite.
Jan  7 12:19:24.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:19:24.178: INFO: namespace: e2e-tests-kubectl-fr2s4, resource: bindings, ignored listing per whitelist
Jan  7 12:19:24.218: INFO: namespace e2e-tests-kubectl-fr2s4 deletion completed in 6.139146054s

• [SLOW TEST:10.907 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
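The deprecated `kubectl run --generator=deployment/v1beta1` call above is roughly equivalent to applying a Deployment manifest by hand; a hedged sketch (the `run:` label convention is what that generator used to bind pods to the Deployment):

```yaml
apiVersion: extensions/v1beta1   # matches "deployment.extensions" in the stdout above
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

As the stderr warning notes, newer clusters should use `kubectl create deployment` instead.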
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:19:24.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  7 12:19:34.984: INFO: Successfully updated pod "annotationupdatef0f9caa4-3147-11ea-8b51-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:19:37.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w548c" for this suite.
Jan  7 12:20:01.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:20:01.233: INFO: namespace: e2e-tests-projected-w548c, resource: bindings, ignored listing per whitelist
Jan  7 12:20:01.314: INFO: namespace e2e-tests-projected-w548c deletion completed in 24.217838698s

• [SLOW TEST:37.096 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
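The annotation-update test above depends on the kubelet rewriting a projected downwardAPI volume file when pod metadata changes. A minimal sketch of such a pod, with hypothetical name, image, and annotation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo            # hypothetical name
  annotations:
    build: "one"                         # patching this later refreshes the mounted file
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine   # hypothetical; the e2e test uses its own mounttest image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```

The refresh is eventual rather than immediate, which is why the test polls instead of asserting once.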
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:20:01.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:20:01.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-zvm88" to be "success or failure"
Jan  7 12:20:01.460: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.962872ms
Jan  7 12:20:03.635: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182778572s
Jan  7 12:20:05.648: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196249154s
Jan  7 12:20:07.860: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408241186s
Jan  7 12:20:09.875: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.423077556s
Jan  7 12:20:11.947: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495767685s
STEP: Saw pod success
Jan  7 12:20:11.948: INFO: Pod "downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:20:11.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:20:12.652: INFO: Waiting for pod downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005 to disappear
Jan  7 12:20:13.066: INFO: Pod downwardapi-volume-0714616c-3148-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:20:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zvm88" for this suite.
Jan  7 12:20:19.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:20:19.603: INFO: namespace: e2e-tests-projected-zvm88, resource: bindings, ignored listing per whitelist
Jan  7 12:20:19.704: INFO: namespace e2e-tests-projected-zvm88 deletion completed in 6.197789947s

• [SLOW TEST:18.389 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:20:19.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  7 12:20:21.052: INFO: Pod name wrapped-volume-race-12bcbfca-3148-11ea-8b51-0242ac110005: Found 0 pods out of 5
Jan  7 12:20:26.088: INFO: Pod name wrapped-volume-race-12bcbfca-3148-11ea-8b51-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-12bcbfca-3148-11ea-8b51-0242ac110005 in namespace e2e-tests-emptydir-wrapper-bhd4q, will wait for the garbage collector to delete the pods
Jan  7 12:22:40.265: INFO: Deleting ReplicationController wrapped-volume-race-12bcbfca-3148-11ea-8b51-0242ac110005 took: 33.681843ms
Jan  7 12:22:40.466: INFO: Terminating ReplicationController wrapped-volume-race-12bcbfca-3148-11ea-8b51-0242ac110005 pods took: 201.477268ms
STEP: Creating RC which spawns configmap-volume pods
Jan  7 12:23:32.803: INFO: Pod name wrapped-volume-race-84fb255c-3148-11ea-8b51-0242ac110005: Found 0 pods out of 5
Jan  7 12:23:37.833: INFO: Pod name wrapped-volume-race-84fb255c-3148-11ea-8b51-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-84fb255c-3148-11ea-8b51-0242ac110005 in namespace e2e-tests-emptydir-wrapper-bhd4q, will wait for the garbage collector to delete the pods
Jan  7 12:25:32.024: INFO: Deleting ReplicationController wrapped-volume-race-84fb255c-3148-11ea-8b51-0242ac110005 took: 25.636412ms
Jan  7 12:25:32.425: INFO: Terminating ReplicationController wrapped-volume-race-84fb255c-3148-11ea-8b51-0242ac110005 pods took: 401.450227ms
STEP: Creating RC which spawns configmap-volume pods
Jan  7 12:26:23.392: INFO: Pod name wrapped-volume-race-eaa46137-3148-11ea-8b51-0242ac110005: Found 0 pods out of 5
Jan  7 12:26:28.417: INFO: Pod name wrapped-volume-race-eaa46137-3148-11ea-8b51-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-eaa46137-3148-11ea-8b51-0242ac110005 in namespace e2e-tests-emptydir-wrapper-bhd4q, will wait for the garbage collector to delete the pods
Jan  7 12:28:42.695: INFO: Deleting ReplicationController wrapped-volume-race-eaa46137-3148-11ea-8b51-0242ac110005 took: 37.655564ms
Jan  7 12:28:43.096: INFO: Terminating ReplicationController wrapped-volume-race-eaa46137-3148-11ea-8b51-0242ac110005 pods took: 400.672308ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:29:34.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-bhd4q" for this suite.
Jan  7 12:29:44.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:29:44.694: INFO: namespace: e2e-tests-emptydir-wrapper-bhd4q, resource: bindings, ignored listing per whitelist
Jan  7 12:29:44.829: INFO: namespace e2e-tests-emptydir-wrapper-bhd4q deletion completed in 10.297655101s

• [SLOW TEST:565.126 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
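The race test above builds an RC whose pod template mounts all 50 ConfigMaps at once — configMap volumes are the so-called wrapper (emptyDir-backed) volumes, and rapidly churning such pods is what historically raced. A trimmed sketch of the volume list (names are assumptions):

```yaml
# Fragment of the pod template's volume list; the e2e test repeats this
# pattern for configmap-0 through configmap-49.
volumes:
- name: racey-configmap-0
  configMap:
    name: configmap-0
- name: racey-configmap-1
  configMap:
    name: configmap-1
```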
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:29:44.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  7 12:29:45.179: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  7 12:29:45.234: INFO: Waiting for terminating namespaces to be deleted...
Jan  7 12:29:45.242: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  7 12:29:45.258: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 12:29:45.258: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  7 12:29:45.258: INFO: 	Container coredns ready: true, restart count 0
Jan  7 12:29:45.258: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  7 12:29:45.258: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  7 12:29:45.258: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 12:29:45.258: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  7 12:29:45.258: INFO: 	Container weave ready: true, restart count 0
Jan  7 12:29:45.258: INFO: 	Container weave-npc ready: true, restart count 0
Jan  7 12:29:45.258: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  7 12:29:45.258: INFO: 	Container coredns ready: true, restart count 0
Jan  7 12:29:45.258: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  7 12:29:45.258: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e79a9b161fae23], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:29:46.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-h4v6h" for this suite.
Jan  7 12:29:52.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:29:52.446: INFO: namespace: e2e-tests-sched-pred-h4v6h, resource: bindings, ignored listing per whitelist
Jan  7 12:29:52.638: INFO: namespace e2e-tests-sched-pred-h4v6h deletion completed in 6.319695505s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.808 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
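The FailedScheduling event above is what any pod produces when its `nodeSelector` matches no node. A minimal reproduction sketch (label key/value and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty                # any key/value no node carries
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # hypothetical image choice
```

The pod stays Pending with a `0/1 nodes are available: 1 node(s) didn't match node selector` event, exactly as logged above.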
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:29:52.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  7 12:29:52.852: INFO: Waiting up to 5m0s for pod "pod-6793603a-3149-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-s76cw" to be "success or failure"
Jan  7 12:29:52.883: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.627841ms
Jan  7 12:29:55.393: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540373489s
Jan  7 12:29:58.933: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080516323s
Jan  7 12:30:00.971: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118480032s
Jan  7 12:30:03.005: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.153176969s
Jan  7 12:30:05.168: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.315847347s
Jan  7 12:30:07.179: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.326396927s
Jan  7 12:30:09.669: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.817218009s
STEP: Saw pod success
Jan  7 12:30:09.670: INFO: Pod "pod-6793603a-3149-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:30:09.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6793603a-3149-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 12:30:10.079: INFO: Waiting for pod pod-6793603a-3149-11ea-8b51-0242ac110005 to disappear
Jan  7 12:30:10.103: INFO: Pod pod-6793603a-3149-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:30:10.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s76cw" for this suite.
Jan  7 12:30:16.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:30:16.203: INFO: namespace: e2e-tests-emptydir-s76cw, resource: bindings, ignored listing per whitelist
Jan  7 12:30:16.310: INFO: namespace e2e-tests-emptydir-s76cw deletion completed in 6.199820464s

• [SLOW TEST:23.671 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
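The `(non-root,0777,tmpfs)` case combines three knobs on one pod: a non-root `runAsUser`, an `emptyDir` backed by memory (tmpfs), and a check that the volume directory carries mode 0777. A hedged sketch (name, UID, image, and command are assumptions; the e2e test uses its own mounttest image to verify the mode):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1001                # the "non-root" part
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume && touch /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
```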
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:30:16.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-75c7ddb1-3149-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 12:30:16.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-jb7r9" to be "success or failure"
Jan  7 12:30:16.731: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.385701ms
Jan  7 12:30:18.749: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037124128s
Jan  7 12:30:20.766: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053560491s
Jan  7 12:30:22.792: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080426686s
Jan  7 12:30:24.931: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219032579s
Jan  7 12:30:26.951: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.23937226s
STEP: Saw pod success
Jan  7 12:30:26.952: INFO: Pod "pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:30:26.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 12:30:27.048: INFO: Waiting for pod pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005 to disappear
Jan  7 12:30:27.063: INFO: Pod pod-configmaps-75c9268b-3149-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:30:27.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jb7r9" for this suite.
Jan  7 12:30:33.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:30:33.244: INFO: namespace: e2e-tests-configmap-jb7r9, resource: bindings, ignored listing per whitelist
Jan  7 12:30:33.316: INFO: namespace e2e-tests-configmap-jb7r9 deletion completed in 6.242068052s

• [SLOW TEST:17.005 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
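"Mappings and Item mode set" refers to the `items` stanza of a configMap volume: a key is remapped to a custom path with an explicit file mode. A hedged sketch (ConfigMap name, key, mode, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # hypothetical image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # assumed to hold a key "data-2"
      items:
      - key: data-2                     # the "mapping": key renamed to a new path
        path: path/to/data-2
        mode: 0400                      # the "Item mode"
```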
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:30:33.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7fd71a9c-3149-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 12:30:33.577: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-vl5sb" to be "success or failure"
Jan  7 12:30:33.598: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.321629ms
Jan  7 12:30:35.632: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054817318s
Jan  7 12:30:37.649: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071381513s
Jan  7 12:30:39.763: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185430989s
Jan  7 12:30:41.873: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.295897957s
Jan  7 12:30:44.180: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.602266308s
STEP: Saw pod success
Jan  7 12:30:44.180: INFO: Pod "pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:30:44.188: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 12:30:44.420: INFO: Waiting for pod pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005 to disappear
Jan  7 12:30:44.460: INFO: Pod pod-projected-configmaps-7fd82da2-3149-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:30:44.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vl5sb" for this suite.
Jan  7 12:30:50.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:30:50.901: INFO: namespace: e2e-tests-projected-vl5sb, resource: bindings, ignored listing per whitelist
Jan  7 12:30:50.959: INFO: namespace e2e-tests-projected-vl5sb deletion completed in 6.488270803s

• [SLOW TEST:17.644 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
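The non-root projected-configMap case is the same volume wrapped in a `projected` source and read by a container running with a non-root UID; a trimmed pod-spec fragment (names and UID are assumptions):

```yaml
# Fragment: only the parts that differ from a plain configMap volume pod.
spec:
  securityContext:
    runAsUser: 1000                # non-root reader
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
```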
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:30:50.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-bnwfc
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-bnwfc
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-bnwfc
STEP: Waiting until pod test-pod is running in namespace e2e-tests-statefulset-bnwfc
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-bnwfc
Jan  7 12:31:05.978: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-bnwfc, name: ss-0, uid: 92655791-3149-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan  7 12:31:06.006: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-bnwfc, name: ss-0, uid: 92655791-3149-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  7 12:31:06.030: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-bnwfc, name: ss-0, uid: 92655791-3149-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan  7 12:31:06.037: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-bnwfc
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-bnwfc
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-bnwfc and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  7 12:31:19.654: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bnwfc
Jan  7 12:31:19.661: INFO: Scaling statefulset ss to 0
Jan  7 12:31:39.722: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 12:31:39.729: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:31:39.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-bnwfc" for this suite.
Jan  7 12:31:47.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:31:48.021: INFO: namespace: e2e-tests-statefulset-bnwfc, resource: bindings, ignored listing per whitelist
Jan  7 12:31:48.066: INFO: namespace e2e-tests-statefulset-bnwfc deletion completed in 8.295588172s

• [SLOW TEST:57.107 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:31:48.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:32:48.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9lcjg" for this suite.
Jan  7 12:33:12.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:33:12.583: INFO: namespace: e2e-tests-container-probe-9lcjg, resource: bindings, ignored listing per whitelist
Jan  7 12:33:12.696: INFO: namespace e2e-tests-container-probe-9lcjg deletion completed in 24.265109717s

• [SLOW TEST:84.630 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:33:12.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  7 12:33:12.904: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:33:36.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wzssz" for this suite.
Jan  7 12:34:00.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:34:01.016: INFO: namespace: e2e-tests-init-container-wzssz, resource: bindings, ignored listing per whitelist
Jan  7 12:34:01.084: INFO: namespace e2e-tests-init-container-wzssz deletion completed in 24.248598687s

• [SLOW TEST:48.387 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:34:01.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-fb9fd778-3149-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 12:34:01.389: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-psc7n" to be "success or failure"
Jan  7 12:34:01.413: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.136071ms
Jan  7 12:34:03.983: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.593459453s
Jan  7 12:34:05.994: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.604504933s
Jan  7 12:34:08.013: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623569956s
Jan  7 12:34:10.066: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.675880329s
Jan  7 12:34:12.087: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.697106193s
Jan  7 12:34:14.103: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.713617651s
STEP: Saw pod success
Jan  7 12:34:14.103: INFO: Pod "pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:34:14.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  7 12:34:14.835: INFO: Waiting for pod pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005 to disappear
Jan  7 12:34:14.845: INFO: Pod pod-projected-configmaps-fbb6d8c4-3149-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:34:14.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-psc7n" for this suite.
Jan  7 12:34:20.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:34:20.972: INFO: namespace: e2e-tests-projected-psc7n, resource: bindings, ignored listing per whitelist
Jan  7 12:34:21.074: INFO: namespace e2e-tests-projected-psc7n deletion completed in 6.222413059s

• [SLOW TEST:19.990 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:34:21.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  7 12:34:31.419: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan  7 12:36:03.469: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-t9qs9".
STEP: Found 0 events.
Jan  7 12:36:03.503: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan  7 12:36:03.503: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:34:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:34:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:34:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 12:34:31 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 12:18:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 12:18:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan  7 12:36:03.503: INFO: 
Jan  7 12:36:03.511: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan  7 12:36:03.519: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:17477543,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-07 12:35:55 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-07 12:35:55 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-07 12:35:55 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 
2020-01-07 12:35:55 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2 nginx:latest] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 
126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} 
{[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} 
{[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan  7 12:36:03.520: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan  7 12:36:03.525: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps
Jan  7 12:36:03.564: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan  7 12:36:03.564: INFO: 	Container weave ready: true, restart count 0
Jan  7 12:36:03.564: INFO: 	Container weave-npc ready: true, restart count 0
Jan  7 12:36:03.564: INFO: test-pod-uninitialized started at 2020-01-07 12:34:31 +0000 UTC (0+1 container statuses recorded)
Jan  7 12:36:03.564: INFO: 	Container nginx ready: true, restart count 0
Jan  7 12:36:03.564: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan  7 12:36:03.564: INFO: 	Container coredns ready: true, restart count 0
Jan  7 12:36:03.564: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  7 12:36:03.564: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  7 12:36:03.564: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  7 12:36:03.564: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan  7 12:36:03.564: INFO: 	Container coredns ready: true, restart count 0
Jan  7 12:36:03.564: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan  7 12:36:03.564: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  7 12:36:03.564: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0107 12:36:03.573855       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 12:36:03.696: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan  7 12:36:03.696: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:1m20.970111s}
Jan  7 12:36:03.696: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:42.935063s}
Jan  7 12:36:03.696: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:33.939179s}
Jan  7 12:36:03.697: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.022408s}
Jan  7 12:36:03.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-t9qs9" for this suite.
Jan  7 12:36:09.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:36:09.948: INFO: namespace: e2e-tests-namespaces-t9qs9, resource: bindings, ignored listing per whitelist
Jan  7 12:36:09.958: INFO: namespace e2e-tests-namespaces-t9qs9 deletion completed in 6.250525972s
STEP: Destroying namespace "e2e-tests-nsdeletetest-kfdjs" for this suite.
Jan  7 12:36:09.963: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-kfdjs": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-kfdjs": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-kfdjs\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc0019a2de0), Code:409}})

• Failure [108.890 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000a18b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:36:09.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 12:36:10.292: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.785185ms)
Jan  7 12:36:10.307: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.728196ms)
Jan  7 12:36:10.317: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.990343ms)
Jan  7 12:36:10.323: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.628348ms)
Jan  7 12:36:10.329: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.900581ms)
Jan  7 12:36:10.406: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 76.266901ms)
Jan  7 12:36:10.415: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.731313ms)
Jan  7 12:36:10.423: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.729245ms)
Jan  7 12:36:10.431: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.985287ms)
Jan  7 12:36:10.443: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.464078ms)
Jan  7 12:36:10.451: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.423576ms)
Jan  7 12:36:10.461: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.450909ms)
Jan  7 12:36:10.498: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.074592ms)
Jan  7 12:36:10.580: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 82.141663ms)
Jan  7 12:36:10.609: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 28.4756ms)
Jan  7 12:36:10.629: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.658412ms)
Jan  7 12:36:10.644: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.149202ms)
Jan  7 12:36:10.654: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.678554ms)
Jan  7 12:36:10.660: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.685944ms)
Jan  7 12:36:10.664: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.38845ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:36:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-mgr4c" for this suite.
Jan  7 12:36:16.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:36:16.818: INFO: namespace: e2e-tests-proxy-mgr4c, resource: bindings, ignored listing per whitelist
Jan  7 12:36:16.828: INFO: namespace e2e-tests-proxy-mgr4c deletion completed in 6.158931876s

• [SLOW TEST:6.863 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:36:16.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0107 12:36:27.460852       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 12:36:27.461: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:36:27.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rw8hs" for this suite.
Jan  7 12:36:34.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:36:34.212: INFO: namespace: e2e-tests-gc-rw8hs, resource: bindings, ignored listing per whitelist
Jan  7 12:36:34.228: INFO: namespace e2e-tests-gc-rw8hs deletion completed in 6.749168212s

• [SLOW TEST:17.400 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:36:34.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:36:34.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-vwv7s" to be "success or failure"
Jan  7 12:36:34.625: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.470595ms
Jan  7 12:36:36.859: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255318822s
Jan  7 12:36:38.909: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305284309s
Jan  7 12:36:41.202: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598175862s
Jan  7 12:36:43.226: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.621464466s
Jan  7 12:36:45.236: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.632126916s
STEP: Saw pod success
Jan  7 12:36:45.236: INFO: Pod "downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:36:45.242: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:36:45.444: INFO: Waiting for pod downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005 to disappear
Jan  7 12:36:45.459: INFO: Pod downwardapi-volume-56f7a658-314a-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:36:45.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vwv7s" for this suite.
Jan  7 12:36:52.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:36:52.136: INFO: namespace: e2e-tests-projected-vwv7s, resource: bindings, ignored listing per whitelist
Jan  7 12:36:52.248: INFO: namespace e2e-tests-projected-vwv7s deletion completed in 6.783139732s

• [SLOW TEST:18.020 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:36:52.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:36:52.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-zvc5g" to be "success or failure"
Jan  7 12:36:52.421: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.738639ms
Jan  7 12:36:54.443: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073558356s
Jan  7 12:36:56.461: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091403881s
Jan  7 12:36:58.691: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.321678761s
Jan  7 12:37:00.703: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333323782s
Jan  7 12:37:02.755: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.385839076s
STEP: Saw pod success
Jan  7 12:37:02.756: INFO: Pod "downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:37:02.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:37:03.128: INFO: Waiting for pod downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005 to disappear
Jan  7 12:37:03.140: INFO: Pod downwardapi-volume-61a297f6-314a-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:37:03.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zvc5g" for this suite.
Jan  7 12:37:09.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:37:09.299: INFO: namespace: e2e-tests-projected-zvc5g, resource: bindings, ignored listing per whitelist
Jan  7 12:37:09.345: INFO: namespace e2e-tests-projected-zvc5g deletion completed in 6.192752685s

• [SLOW TEST:17.096 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:37:09.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 12:37:09.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-f6xpm'
Jan  7 12:37:11.613: INFO: stderr: ""
Jan  7 12:37:11.613: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  7 12:37:11.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-f6xpm'
Jan  7 12:37:19.579: INFO: stderr: ""
Jan  7 12:37:19.579: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:37:19.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f6xpm" for this suite.
Jan  7 12:37:25.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:37:26.061: INFO: namespace: e2e-tests-kubectl-f6xpm, resource: bindings, ignored listing per whitelist
Jan  7 12:37:26.146: INFO: namespace e2e-tests-kubectl-f6xpm deletion completed in 6.520871558s

• [SLOW TEST:16.801 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:37:26.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-75e2f28a-314a-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 12:37:26.359: INFO: Waiting up to 5m0s for pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-9j5ql" to be "success or failure"
Jan  7 12:37:26.370: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.211997ms
Jan  7 12:37:28.394: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034388703s
Jan  7 12:37:30.410: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050800726s
Jan  7 12:37:32.724: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365014639s
Jan  7 12:37:34.756: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396314055s
Jan  7 12:37:36.791: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.431260016s
STEP: Saw pod success
Jan  7 12:37:36.791: INFO: Pod "pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:37:36.799: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 12:37:37.098: INFO: Waiting for pod pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005 to disappear
Jan  7 12:37:37.110: INFO: Pod pod-configmaps-75e46a90-314a-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:37:37.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9j5ql" for this suite.
Jan  7 12:37:43.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:37:43.641: INFO: namespace: e2e-tests-configmap-9j5ql, resource: bindings, ignored listing per whitelist
Jan  7 12:37:43.669: INFO: namespace e2e-tests-configmap-9j5ql deletion completed in 6.547244177s

• [SLOW TEST:17.523 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:37:43.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 12:38:14.111: INFO: Container started at 2020-01-07 12:37:51 +0000 UTC, pod became ready at 2020-01-07 12:38:13 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:38:14.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fz9r7" for this suite.
Jan  7 12:38:38.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:38:38.249: INFO: namespace: e2e-tests-container-probe-fz9r7, resource: bindings, ignored listing per whitelist
Jan  7 12:38:38.331: INFO: namespace e2e-tests-container-probe-fz9r7 deletion completed in 24.207664414s

• [SLOW TEST:54.661 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:38:38.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:38:38.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-vprgr" to be "success or failure"
Jan  7 12:38:38.814: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 103.852453ms
Jan  7 12:38:41.130: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419667858s
Jan  7 12:38:43.157: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447440818s
Jan  7 12:38:45.340: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630129799s
Jan  7 12:38:47.369: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659133456s
Jan  7 12:38:49.383: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672609297s
STEP: Saw pod success
Jan  7 12:38:49.383: INFO: Pod "downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:38:49.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:38:49.458: INFO: Waiting for pod downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005 to disappear
Jan  7 12:38:49.473: INFO: Pod downwardapi-volume-a0fd6411-314a-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:38:49.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vprgr" for this suite.
Jan  7 12:38:56.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:38:57.026: INFO: namespace: e2e-tests-downward-api-vprgr, resource: bindings, ignored listing per whitelist
Jan  7 12:38:57.041: INFO: namespace e2e-tests-downward-api-vprgr deletion completed in 7.463044677s

• [SLOW TEST:18.711 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:38:57.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7nkkb
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  7 12:38:57.283: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  7 12:39:33.617: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7nkkb PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 12:39:33.618: INFO: >>> kubeConfig: /root/.kube/config
I0107 12:39:33.740887       8 log.go:172] (0xc0001e34a0) (0xc0019cad20) Create stream
I0107 12:39:33.741040       8 log.go:172] (0xc0001e34a0) (0xc0019cad20) Stream added, broadcasting: 1
I0107 12:39:33.749675       8 log.go:172] (0xc0001e34a0) Reply frame received for 1
I0107 12:39:33.749718       8 log.go:172] (0xc0001e34a0) (0xc0017c3360) Create stream
I0107 12:39:33.749767       8 log.go:172] (0xc0001e34a0) (0xc0017c3360) Stream added, broadcasting: 3
I0107 12:39:33.751311       8 log.go:172] (0xc0001e34a0) Reply frame received for 3
I0107 12:39:33.751357       8 log.go:172] (0xc0001e34a0) (0xc001b89900) Create stream
I0107 12:39:33.751376       8 log.go:172] (0xc0001e34a0) (0xc001b89900) Stream added, broadcasting: 5
I0107 12:39:33.752742       8 log.go:172] (0xc0001e34a0) Reply frame received for 5
I0107 12:39:35.010102       8 log.go:172] (0xc0001e34a0) Data frame received for 3
I0107 12:39:35.010197       8 log.go:172] (0xc0017c3360) (3) Data frame handling
I0107 12:39:35.010234       8 log.go:172] (0xc0017c3360) (3) Data frame sent
I0107 12:39:35.160350       8 log.go:172] (0xc0001e34a0) (0xc0017c3360) Stream removed, broadcasting: 3
I0107 12:39:35.160634       8 log.go:172] (0xc0001e34a0) Data frame received for 1
I0107 12:39:35.160646       8 log.go:172] (0xc0019cad20) (1) Data frame handling
I0107 12:39:35.160654       8 log.go:172] (0xc0019cad20) (1) Data frame sent
I0107 12:39:35.160659       8 log.go:172] (0xc0001e34a0) (0xc0019cad20) Stream removed, broadcasting: 1
I0107 12:39:35.160903       8 log.go:172] (0xc0001e34a0) (0xc001b89900) Stream removed, broadcasting: 5
I0107 12:39:35.160929       8 log.go:172] (0xc0001e34a0) (0xc0019cad20) Stream removed, broadcasting: 1
I0107 12:39:35.160935       8 log.go:172] (0xc0001e34a0) (0xc0017c3360) Stream removed, broadcasting: 3
I0107 12:39:35.160941       8 log.go:172] (0xc0001e34a0) (0xc001b89900) Stream removed, broadcasting: 5
I0107 12:39:35.161313       8 log.go:172] (0xc0001e34a0) Go away received
Jan  7 12:39:35.161: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:39:35.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7nkkb" for this suite.
Jan  7 12:40:01.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:40:01.460: INFO: namespace: e2e-tests-pod-network-test-7nkkb, resource: bindings, ignored listing per whitelist
Jan  7 12:40:01.484: INFO: namespace e2e-tests-pod-network-test-7nkkb deletion completed in 26.298821782s

• [SLOW TEST:64.442 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:40:01.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 12:40:01.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-sc7n4'
Jan  7 12:40:01.897: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 12:40:01.898: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  7 12:40:02.074: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-fzfdj]
Jan  7 12:40:02.075: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-fzfdj" in namespace "e2e-tests-kubectl-sc7n4" to be "running and ready"
Jan  7 12:40:02.139: INFO: Pod "e2e-test-nginx-rc-fzfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 63.744683ms
Jan  7 12:40:04.150: INFO: Pod "e2e-test-nginx-rc-fzfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07497357s
Jan  7 12:40:06.164: INFO: Pod "e2e-test-nginx-rc-fzfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089705795s
Jan  7 12:40:08.179: INFO: Pod "e2e-test-nginx-rc-fzfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10386767s
Jan  7 12:40:10.199: INFO: Pod "e2e-test-nginx-rc-fzfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124214393s
Jan  7 12:40:12.215: INFO: Pod "e2e-test-nginx-rc-fzfdj": Phase="Running", Reason="", readiness=true. Elapsed: 10.140113625s
Jan  7 12:40:12.215: INFO: Pod "e2e-test-nginx-rc-fzfdj" satisfied condition "running and ready"
Jan  7 12:40:12.215: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-fzfdj]
Jan  7 12:40:12.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sc7n4'
Jan  7 12:40:12.396: INFO: stderr: ""
Jan  7 12:40:12.396: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  7 12:40:12.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sc7n4'
Jan  7 12:40:12.838: INFO: stderr: ""
Jan  7 12:40:12.838: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:40:12.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sc7n4" for this suite.
Jan  7 12:40:35.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:40:35.145: INFO: namespace: e2e-tests-kubectl-sc7n4, resource: bindings, ignored listing per whitelist
Jan  7 12:40:35.160: INFO: namespace e2e-tests-kubectl-sc7n4 deletion completed in 22.258423139s

• [SLOW TEST:33.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
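The "Kubectl run rc" test above drives `kubectl run` with the legacy run/v1 generator and then tails `logs rc/e2e-test-nginx-rc`. On a v1.13 cluster that generator produces a ReplicationController roughly like the sketch below; the labels and container name are illustrative assumptions, and the image is inferred from the test name and the 1.14-alpine image used elsewhere in this suite.

```yaml
# Approximate object created by:
#   kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
# Labels and container name are illustrative, not taken from the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

The empty stdout at 12:40:12.396 is expected: nginx writes its access log to the container's stdout only once requests arrive, and the test only checks that `kubectl logs rc/...` resolves the controller to a pod.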
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:40:35.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e697f937-314a-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 12:40:35.447: INFO: Waiting up to 5m0s for pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-jg4rf" to be "success or failure"
Jan  7 12:40:35.501: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.279417ms
Jan  7 12:40:37.515: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067620477s
Jan  7 12:40:39.556: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108246015s
Jan  7 12:40:41.879: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.431495318s
Jan  7 12:40:44.515: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.067434923s
Jan  7 12:40:46.554: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.105996787s
STEP: Saw pod success
Jan  7 12:40:46.554: INFO: Pod "pod-secrets-e6995123-314a-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:40:46.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e6995123-314a-11ea-8b51-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan  7 12:40:47.664: INFO: Waiting for pod pod-secrets-e6995123-314a-11ea-8b51-0242ac110005 to disappear
Jan  7 12:40:48.064: INFO: Pod pod-secrets-e6995123-314a-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:40:48.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jg4rf" for this suite.
Jan  7 12:40:54.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:40:54.282: INFO: namespace: e2e-tests-secrets-jg4rf, resource: bindings, ignored listing per whitelist
Jan  7 12:40:54.359: INFO: namespace e2e-tests-secrets-jg4rf deletion completed in 6.277822909s

• [SLOW TEST:19.198 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
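The Secrets test above creates a Secret and a short-lived pod whose container imports one of the Secret's keys as an environment variable, then asserts the pod reaches "success or failure" as Succeeded. A minimal sketch of that pattern (names mirror the log but are illustrative; the key, value, and busybox image are assumptions):

```yaml
# Secret consumed via an environment variable, as exercised by the test.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==   # base64 of "value-1" (illustrative)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never    # pod runs once and exits, matching "success or failure"
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints SECRET_DATA so logs can be verified
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```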
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:40:54.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-h52sn
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  7 12:40:54.788: INFO: Found 0 stateful pods, waiting for 3
Jan  7 12:41:04.806: INFO: Found 1 stateful pods, waiting for 3
Jan  7 12:41:14.803: INFO: Found 2 stateful pods, waiting for 3
Jan  7 12:41:24.805: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:41:24.805: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:41:24.805: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  7 12:41:34.799: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:41:34.799: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:41:34.799: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  7 12:41:34.871: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  7 12:41:45.018: INFO: Updating stateful set ss2
Jan  7 12:41:45.043: INFO: Waiting for Pod e2e-tests-statefulset-h52sn/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 12:41:55.074: INFO: Waiting for Pod e2e-tests-statefulset-h52sn/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  7 12:42:06.570: INFO: Found 2 stateful pods, waiting for 3
Jan  7 12:42:16.605: INFO: Found 2 stateful pods, waiting for 3
Jan  7 12:42:26.608: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:42:26.608: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:42:26.608: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  7 12:42:36.602: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:42:36.602: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 12:42:36.602: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  7 12:42:36.659: INFO: Updating stateful set ss2
Jan  7 12:42:36.696: INFO: Waiting for Pod e2e-tests-statefulset-h52sn/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 12:42:46.864: INFO: Updating stateful set ss2
Jan  7 12:42:47.030: INFO: Waiting for StatefulSet e2e-tests-statefulset-h52sn/ss2 to complete update
Jan  7 12:42:47.030: INFO: Waiting for Pod e2e-tests-statefulset-h52sn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 12:42:57.067: INFO: Waiting for StatefulSet e2e-tests-statefulset-h52sn/ss2 to complete update
Jan  7 12:42:57.067: INFO: Waiting for Pod e2e-tests-statefulset-h52sn/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 12:43:07.139: INFO: Waiting for StatefulSet e2e-tests-statefulset-h52sn/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  7 12:43:17.071: INFO: Deleting all statefulset in ns e2e-tests-statefulset-h52sn
Jan  7 12:43:17.077: INFO: Scaling statefulset ss2 to 0
Jan  7 12:43:47.152: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 12:43:47.164: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:43:47.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-h52sn" for this suite.
Jan  7 12:43:55.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:43:55.389: INFO: namespace: e2e-tests-statefulset-h52sn, resource: bindings, ignored listing per whitelist
Jan  7 12:43:55.598: INFO: namespace e2e-tests-statefulset-h52sn deletion completed in 8.342886934s

• [SLOW TEST:181.239 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
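The canary and phased rolling updates logged above are driven by the StatefulSet RollingUpdate `partition` field: with `partition: 2` only pods with ordinal >= 2 (here ss2-2) receive the new template, and lowering the partition then phases the update across ss2-1 and ss2-0 — which is exactly the per-pod "Waiting for Pod ... to have revision" sequence in the log. A sketch of the spec under test (names and images mirror the log; labels and serviceName are illustrative):

```yaml
# Canary update via updateStrategy.rollingUpdate.partition.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2        # canary: only ordinals >= 2 are updated
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.15-alpine   # updated from 1.14-alpine
```

Setting `partition` back to 0 (or below the lowest un-updated ordinal) lets the controller roll the remaining pods, one ordinal at a time in descending order.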
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:43:55.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-5e064bdc-314b-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 12:43:55.826: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-9w4v4" to be "success or failure"
Jan  7 12:43:55.912: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 85.176154ms
Jan  7 12:43:57.937: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109916112s
Jan  7 12:43:59.959: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132762558s
Jan  7 12:44:02.202: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.375606701s
Jan  7 12:44:04.218: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.391516991s
Jan  7 12:44:06.499: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672786561s
STEP: Saw pod success
Jan  7 12:44:06.500: INFO: Pod "pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:44:06.513: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 12:44:07.041: INFO: Waiting for pod pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005 to disappear
Jan  7 12:44:07.072: INFO: Pod pod-projected-secrets-5e06fd12-314b-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:44:07.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9w4v4" for this suite.
Jan  7 12:44:13.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:44:13.208: INFO: namespace: e2e-tests-projected-9w4v4, resource: bindings, ignored listing per whitelist
Jan  7 12:44:13.281: INFO: namespace e2e-tests-projected-9w4v4 deletion completed in 6.196691558s

• [SLOW TEST:17.682 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
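The projected-secret test above mounts a Secret through a `projected` volume with `defaultMode` set, then has the container list the mounted files so the test can verify their permissions. A minimal sketch (names mirror the log but are illustrative; the mount path and busybox image are assumptions):

```yaml
# Secret delivered through a projected volume with an explicit file mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # octal; files are created read-only for the owner
      sources:
      - secret:
          name: projected-secret-test
```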
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:44:13.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-rmr8n
I0107 12:44:13.515053       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-rmr8n, replica count: 1
I0107 12:44:14.566623       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:15.567854       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:16.568660       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:17.569233       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:18.570136       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:19.571156       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:20.571867       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:21.572497       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:22.573439       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0107 12:44:23.574036       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  7 12:44:23.789: INFO: Created: latency-svc-mgvkr
Jan  7 12:44:23.919: INFO: Got endpoints: latency-svc-mgvkr [244.716784ms]
Jan  7 12:44:24.044: INFO: Created: latency-svc-m8z7g
Jan  7 12:44:24.129: INFO: Got endpoints: latency-svc-m8z7g [207.90519ms]
Jan  7 12:44:24.149: INFO: Created: latency-svc-skxk7
Jan  7 12:44:24.257: INFO: Got endpoints: latency-svc-skxk7 [336.211913ms]
Jan  7 12:44:24.283: INFO: Created: latency-svc-nfprl
Jan  7 12:44:24.326: INFO: Got endpoints: latency-svc-nfprl [406.344353ms]
Jan  7 12:44:24.539: INFO: Created: latency-svc-w4v59
Jan  7 12:44:24.559: INFO: Got endpoints: latency-svc-w4v59 [638.202397ms]
Jan  7 12:44:24.687: INFO: Created: latency-svc-zj7gd
Jan  7 12:44:24.745: INFO: Created: latency-svc-c27lk
Jan  7 12:44:24.755: INFO: Got endpoints: latency-svc-zj7gd [833.786192ms]
Jan  7 12:44:24.769: INFO: Got endpoints: latency-svc-c27lk [848.5321ms]
Jan  7 12:44:24.977: INFO: Created: latency-svc-w2rr7
Jan  7 12:44:25.178: INFO: Got endpoints: latency-svc-w2rr7 [1.256690875s]
Jan  7 12:44:25.201: INFO: Created: latency-svc-bg44g
Jan  7 12:44:25.207: INFO: Got endpoints: latency-svc-bg44g [1.286024459s]
Jan  7 12:44:25.359: INFO: Created: latency-svc-sqt64
Jan  7 12:44:25.384: INFO: Got endpoints: latency-svc-sqt64 [1.463134379s]
Jan  7 12:44:25.461: INFO: Created: latency-svc-d2fr2
Jan  7 12:44:25.561: INFO: Got endpoints: latency-svc-d2fr2 [1.639458747s]
Jan  7 12:44:25.656: INFO: Created: latency-svc-wk65n
Jan  7 12:44:25.796: INFO: Got endpoints: latency-svc-wk65n [1.876298284s]
Jan  7 12:44:25.839: INFO: Created: latency-svc-wng8q
Jan  7 12:44:26.003: INFO: Got endpoints: latency-svc-wng8q [2.081487818s]
Jan  7 12:44:26.033: INFO: Created: latency-svc-tkgc5
Jan  7 12:44:26.039: INFO: Got endpoints: latency-svc-tkgc5 [2.118110455s]
Jan  7 12:44:26.198: INFO: Created: latency-svc-dd755
Jan  7 12:44:26.212: INFO: Got endpoints: latency-svc-dd755 [2.290747875s]
Jan  7 12:44:26.458: INFO: Created: latency-svc-qdjvh
Jan  7 12:44:26.504: INFO: Got endpoints: latency-svc-qdjvh [2.582660026s]
Jan  7 12:44:26.557: INFO: Created: latency-svc-dxpdd
Jan  7 12:44:26.681: INFO: Got endpoints: latency-svc-dxpdd [2.552281632s]
Jan  7 12:44:26.730: INFO: Created: latency-svc-j88tk
Jan  7 12:44:26.760: INFO: Got endpoints: latency-svc-j88tk [2.502568633s]
Jan  7 12:44:26.905: INFO: Created: latency-svc-6xq4b
Jan  7 12:44:26.942: INFO: Got endpoints: latency-svc-6xq4b [2.615382406s]
Jan  7 12:44:27.089: INFO: Created: latency-svc-jqx79
Jan  7 12:44:27.089: INFO: Got endpoints: latency-svc-jqx79 [2.529415947s]
Jan  7 12:44:27.134: INFO: Created: latency-svc-8f474
Jan  7 12:44:27.138: INFO: Got endpoints: latency-svc-8f474 [2.382704539s]
Jan  7 12:44:27.282: INFO: Created: latency-svc-lgbfs
Jan  7 12:44:27.298: INFO: Got endpoints: latency-svc-lgbfs [2.528262502s]
Jan  7 12:44:27.503: INFO: Created: latency-svc-9mslk
Jan  7 12:44:27.504: INFO: Got endpoints: latency-svc-9mslk [2.325813075s]
Jan  7 12:44:27.761: INFO: Created: latency-svc-z852s
Jan  7 12:44:27.761: INFO: Got endpoints: latency-svc-z852s [2.554114161s]
Jan  7 12:44:27.821: INFO: Created: latency-svc-nvhck
Jan  7 12:44:28.075: INFO: Got endpoints: latency-svc-nvhck [2.690255145s]
Jan  7 12:44:28.112: INFO: Created: latency-svc-7rb7k
Jan  7 12:44:28.135: INFO: Got endpoints: latency-svc-7rb7k [2.57409881s]
Jan  7 12:44:28.377: INFO: Created: latency-svc-ks7n4
Jan  7 12:44:28.391: INFO: Got endpoints: latency-svc-ks7n4 [2.594117569s]
Jan  7 12:44:28.516: INFO: Created: latency-svc-hz6vv
Jan  7 12:44:28.549: INFO: Got endpoints: latency-svc-hz6vv [2.545748888s]
Jan  7 12:44:28.755: INFO: Created: latency-svc-ks899
Jan  7 12:44:28.755: INFO: Got endpoints: latency-svc-ks899 [2.715894859s]
Jan  7 12:44:28.969: INFO: Created: latency-svc-k8ljl
Jan  7 12:44:28.974: INFO: Got endpoints: latency-svc-k8ljl [2.762091637s]
Jan  7 12:44:29.222: INFO: Created: latency-svc-6l5vr
Jan  7 12:44:29.223: INFO: Got endpoints: latency-svc-6l5vr [2.718741328s]
Jan  7 12:44:29.304: INFO: Created: latency-svc-5jwrr
Jan  7 12:44:29.465: INFO: Got endpoints: latency-svc-5jwrr [2.782951699s]
Jan  7 12:44:29.524: INFO: Created: latency-svc-s9vwb
Jan  7 12:44:29.550: INFO: Got endpoints: latency-svc-s9vwb [2.789553419s]
Jan  7 12:44:29.813: INFO: Created: latency-svc-qg6h9
Jan  7 12:44:29.875: INFO: Got endpoints: latency-svc-qg6h9 [2.932262329s]
Jan  7 12:44:29.879: INFO: Created: latency-svc-tggwg
Jan  7 12:44:29.997: INFO: Got endpoints: latency-svc-tggwg [2.908164943s]
Jan  7 12:44:30.052: INFO: Created: latency-svc-kgxw8
Jan  7 12:44:30.062: INFO: Got endpoints: latency-svc-kgxw8 [2.923865976s]
Jan  7 12:44:30.241: INFO: Created: latency-svc-rsczb
Jan  7 12:44:30.383: INFO: Got endpoints: latency-svc-rsczb [3.085525119s]
Jan  7 12:44:30.422: INFO: Created: latency-svc-cp7kq
Jan  7 12:44:30.431: INFO: Got endpoints: latency-svc-cp7kq [2.927527655s]
Jan  7 12:44:30.686: INFO: Created: latency-svc-mm7jp
Jan  7 12:44:30.746: INFO: Got endpoints: latency-svc-mm7jp [2.984302949s]
Jan  7 12:44:30.757: INFO: Created: latency-svc-mbj9j
Jan  7 12:44:30.906: INFO: Created: latency-svc-5m9mj
Jan  7 12:44:30.912: INFO: Got endpoints: latency-svc-mbj9j [2.836685246s]
Jan  7 12:44:30.947: INFO: Got endpoints: latency-svc-5m9mj [2.811645621s]
Jan  7 12:44:31.092: INFO: Created: latency-svc-zp9rf
Jan  7 12:44:31.101: INFO: Got endpoints: latency-svc-zp9rf [2.710104449s]
Jan  7 12:44:31.147: INFO: Created: latency-svc-hjzmd
Jan  7 12:44:31.178: INFO: Got endpoints: latency-svc-hjzmd [2.629136516s]
Jan  7 12:44:31.329: INFO: Created: latency-svc-vrg5c
Jan  7 12:44:31.339: INFO: Got endpoints: latency-svc-vrg5c [2.583342442s]
Jan  7 12:44:31.400: INFO: Created: latency-svc-zhsmw
Jan  7 12:44:31.518: INFO: Got endpoints: latency-svc-zhsmw [2.54361555s]
Jan  7 12:44:31.546: INFO: Created: latency-svc-h4nzv
Jan  7 12:44:31.557: INFO: Got endpoints: latency-svc-h4nzv [2.33428073s]
Jan  7 12:44:31.760: INFO: Created: latency-svc-8r2kr
Jan  7 12:44:31.777: INFO: Got endpoints: latency-svc-8r2kr [2.311040802s]
Jan  7 12:44:31.869: INFO: Created: latency-svc-8nmh5
Jan  7 12:44:31.940: INFO: Got endpoints: latency-svc-8nmh5 [2.39009088s]
Jan  7 12:44:31.987: INFO: Created: latency-svc-4hcql
Jan  7 12:44:32.032: INFO: Got endpoints: latency-svc-4hcql [2.156819109s]
Jan  7 12:44:32.219: INFO: Created: latency-svc-hdbhk
Jan  7 12:44:32.219: INFO: Created: latency-svc-f54d8
Jan  7 12:44:32.244: INFO: Got endpoints: latency-svc-hdbhk [2.247080469s]
Jan  7 12:44:32.245: INFO: Got endpoints: latency-svc-f54d8 [304.053519ms]
Jan  7 12:44:32.410: INFO: Created: latency-svc-r2mxj
Jan  7 12:44:32.436: INFO: Got endpoints: latency-svc-r2mxj [2.374002258s]
Jan  7 12:44:32.619: INFO: Created: latency-svc-6q958
Jan  7 12:44:32.625: INFO: Got endpoints: latency-svc-6q958 [2.241370054s]
Jan  7 12:44:32.668: INFO: Created: latency-svc-89p2z
Jan  7 12:44:32.777: INFO: Got endpoints: latency-svc-89p2z [2.345723885s]
Jan  7 12:44:32.800: INFO: Created: latency-svc-xvz66
Jan  7 12:44:32.857: INFO: Got endpoints: latency-svc-xvz66 [2.110801635s]
Jan  7 12:44:32.956: INFO: Created: latency-svc-96tsr
Jan  7 12:44:32.981: INFO: Got endpoints: latency-svc-96tsr [2.068542402s]
Jan  7 12:44:33.026: INFO: Created: latency-svc-qdqqm
Jan  7 12:44:33.139: INFO: Got endpoints: latency-svc-qdqqm [2.191823149s]
Jan  7 12:44:33.167: INFO: Created: latency-svc-2hn72
Jan  7 12:44:33.169: INFO: Got endpoints: latency-svc-2hn72 [2.068000342s]
Jan  7 12:44:33.257: INFO: Created: latency-svc-gxgtq
Jan  7 12:44:33.351: INFO: Got endpoints: latency-svc-gxgtq [2.171925655s]
Jan  7 12:44:33.372: INFO: Created: latency-svc-r6bbf
Jan  7 12:44:33.378: INFO: Got endpoints: latency-svc-r6bbf [2.038588875s]
Jan  7 12:44:33.432: INFO: Created: latency-svc-px8rr
Jan  7 12:44:33.598: INFO: Got endpoints: latency-svc-px8rr [2.079306148s]
Jan  7 12:44:33.637: INFO: Created: latency-svc-2hm8m
Jan  7 12:44:33.685: INFO: Got endpoints: latency-svc-2hm8m [2.127659643s]
Jan  7 12:44:33.854: INFO: Created: latency-svc-nl2lc
Jan  7 12:44:33.889: INFO: Got endpoints: latency-svc-nl2lc [2.112462906s]
Jan  7 12:44:34.048: INFO: Created: latency-svc-4f9wg
Jan  7 12:44:34.247: INFO: Created: latency-svc-mxmgz
Jan  7 12:44:34.272: INFO: Got endpoints: latency-svc-4f9wg [2.239005832s]
Jan  7 12:44:34.282: INFO: Got endpoints: latency-svc-mxmgz [2.037178191s]
Jan  7 12:44:34.311: INFO: Created: latency-svc-zcghb
Jan  7 12:44:34.481: INFO: Got endpoints: latency-svc-zcghb [2.236253767s]
Jan  7 12:44:34.553: INFO: Created: latency-svc-t7n2n
Jan  7 12:44:34.750: INFO: Got endpoints: latency-svc-t7n2n [2.314242198s]
Jan  7 12:44:34.785: INFO: Created: latency-svc-jk4wt
Jan  7 12:44:34.820: INFO: Got endpoints: latency-svc-jk4wt [2.194518737s]
Jan  7 12:44:34.984: INFO: Created: latency-svc-nsckb
Jan  7 12:44:34.997: INFO: Got endpoints: latency-svc-nsckb [2.219657836s]
Jan  7 12:44:35.232: INFO: Created: latency-svc-4ff9h
Jan  7 12:44:35.233: INFO: Got endpoints: latency-svc-4ff9h [2.37474368s]
Jan  7 12:44:35.248: INFO: Created: latency-svc-8l9cd
Jan  7 12:44:35.274: INFO: Got endpoints: latency-svc-8l9cd [2.293296928s]
Jan  7 12:44:35.395: INFO: Created: latency-svc-rjslv
Jan  7 12:44:35.419: INFO: Got endpoints: latency-svc-rjslv [2.279559213s]
Jan  7 12:44:35.590: INFO: Created: latency-svc-dlhln
Jan  7 12:44:35.628: INFO: Got endpoints: latency-svc-dlhln [2.458426702s]
Jan  7 12:44:35.673: INFO: Created: latency-svc-rr22t
Jan  7 12:44:35.760: INFO: Got endpoints: latency-svc-rr22t [2.40921653s]
Jan  7 12:44:35.835: INFO: Created: latency-svc-tg7sv
Jan  7 12:44:36.072: INFO: Got endpoints: latency-svc-tg7sv [2.693991567s]
Jan  7 12:44:36.112: INFO: Created: latency-svc-wkhrk
Jan  7 12:44:36.127: INFO: Got endpoints: latency-svc-wkhrk [2.528798972s]
Jan  7 12:44:36.290: INFO: Created: latency-svc-66xwb
Jan  7 12:44:36.305: INFO: Got endpoints: latency-svc-66xwb [2.619270512s]
Jan  7 12:44:36.577: INFO: Created: latency-svc-r7kfs
Jan  7 12:44:36.585: INFO: Got endpoints: latency-svc-r7kfs [2.695389635s]
Jan  7 12:44:36.638: INFO: Created: latency-svc-gvnk4
Jan  7 12:44:36.688: INFO: Got endpoints: latency-svc-gvnk4 [2.416110622s]
Jan  7 12:44:36.940: INFO: Created: latency-svc-hbn2q
Jan  7 12:44:36.971: INFO: Got endpoints: latency-svc-hbn2q [2.689506728s]
Jan  7 12:44:37.183: INFO: Created: latency-svc-gk5pz
Jan  7 12:44:37.216: INFO: Got endpoints: latency-svc-gk5pz [2.734359172s]
Jan  7 12:44:37.541: INFO: Created: latency-svc-czhg4
Jan  7 12:44:37.682: INFO: Got endpoints: latency-svc-czhg4 [2.931695389s]
Jan  7 12:44:37.708: INFO: Created: latency-svc-2m72x
Jan  7 12:44:37.725: INFO: Got endpoints: latency-svc-2m72x [2.903797082s]
Jan  7 12:44:37.776: INFO: Created: latency-svc-22fwq
Jan  7 12:44:37.855: INFO: Got endpoints: latency-svc-22fwq [2.857884275s]
Jan  7 12:44:37.947: INFO: Created: latency-svc-vqqv5
Jan  7 12:44:38.098: INFO: Got endpoints: latency-svc-vqqv5 [2.865683312s]
Jan  7 12:44:38.171: INFO: Created: latency-svc-np5w5
Jan  7 12:44:38.171: INFO: Got endpoints: latency-svc-np5w5 [2.896336763s]
Jan  7 12:44:38.292: INFO: Created: latency-svc-gq56h
Jan  7 12:44:38.299: INFO: Got endpoints: latency-svc-gq56h [2.879319053s]
Jan  7 12:44:38.457: INFO: Created: latency-svc-8ll8c
Jan  7 12:44:38.475: INFO: Got endpoints: latency-svc-8ll8c [2.847546259s]
Jan  7 12:44:38.643: INFO: Created: latency-svc-vgpm4
Jan  7 12:44:38.742: INFO: Got endpoints: latency-svc-vgpm4 [2.981647936s]
Jan  7 12:44:38.882: INFO: Created: latency-svc-6jstv
Jan  7 12:44:38.918: INFO: Got endpoints: latency-svc-6jstv [2.84568455s]
Jan  7 12:44:38.959: INFO: Created: latency-svc-dsmt7
Jan  7 12:44:39.044: INFO: Got endpoints: latency-svc-dsmt7 [2.917192028s]
Jan  7 12:44:39.068: INFO: Created: latency-svc-ctq5n
Jan  7 12:44:39.085: INFO: Got endpoints: latency-svc-ctq5n [2.779955936s]
Jan  7 12:44:39.283: INFO: Created: latency-svc-29fmr
Jan  7 12:44:39.298: INFO: Got endpoints: latency-svc-29fmr [2.713167327s]
Jan  7 12:44:39.486: INFO: Created: latency-svc-tnktz
Jan  7 12:44:39.491: INFO: Got endpoints: latency-svc-tnktz [2.802728193s]
Jan  7 12:44:39.696: INFO: Created: latency-svc-tmzsg
Jan  7 12:44:39.718: INFO: Got endpoints: latency-svc-tmzsg [2.746218022s]
Jan  7 12:44:39.775: INFO: Created: latency-svc-gt7p5
Jan  7 12:44:39.880: INFO: Got endpoints: latency-svc-gt7p5 [2.66352552s]
Jan  7 12:44:39.904: INFO: Created: latency-svc-s87rm
Jan  7 12:44:39.921: INFO: Got endpoints: latency-svc-s87rm [2.238931936s]
Jan  7 12:44:40.151: INFO: Created: latency-svc-rrnnj
Jan  7 12:44:40.167: INFO: Got endpoints: latency-svc-rrnnj [2.442147465s]
Jan  7 12:44:40.304: INFO: Created: latency-svc-tq2mq
Jan  7 12:44:40.309: INFO: Got endpoints: latency-svc-tq2mq [2.453820917s]
Jan  7 12:44:40.354: INFO: Created: latency-svc-jh9lv
Jan  7 12:44:40.371: INFO: Got endpoints: latency-svc-jh9lv [2.272700639s]
Jan  7 12:44:40.505: INFO: Created: latency-svc-2bxv7
Jan  7 12:44:40.552: INFO: Got endpoints: latency-svc-2bxv7 [2.380752747s]
Jan  7 12:44:40.744: INFO: Created: latency-svc-svmsb
Jan  7 12:44:40.744: INFO: Got endpoints: latency-svc-svmsb [2.445256668s]
Jan  7 12:44:40.780: INFO: Created: latency-svc-fcvhc
Jan  7 12:44:40.898: INFO: Got endpoints: latency-svc-fcvhc [2.421935126s]
Jan  7 12:44:40.965: INFO: Created: latency-svc-mrpqh
Jan  7 12:44:40.966: INFO: Got endpoints: latency-svc-mrpqh [2.222669263s]
Jan  7 12:44:41.188: INFO: Created: latency-svc-nqpp8
Jan  7 12:44:41.218: INFO: Got endpoints: latency-svc-nqpp8 [2.299401572s]
Jan  7 12:44:41.371: INFO: Created: latency-svc-8whlp
Jan  7 12:44:41.382: INFO: Got endpoints: latency-svc-8whlp [2.337133913s]
Jan  7 12:44:41.580: INFO: Created: latency-svc-9t6sl
Jan  7 12:44:41.610: INFO: Got endpoints: latency-svc-9t6sl [2.524153837s]
Jan  7 12:44:41.762: INFO: Created: latency-svc-gz8m9
Jan  7 12:44:41.771: INFO: Got endpoints: latency-svc-gz8m9 [2.472479415s]
Jan  7 12:44:41.972: INFO: Created: latency-svc-5qmzp
Jan  7 12:44:41.980: INFO: Got endpoints: latency-svc-5qmzp [2.488802424s]
Jan  7 12:44:42.183: INFO: Created: latency-svc-kmkjz
Jan  7 12:44:42.201: INFO: Got endpoints: latency-svc-kmkjz [2.483322377s]
Jan  7 12:44:42.272: INFO: Created: latency-svc-227kz
Jan  7 12:44:42.396: INFO: Got endpoints: latency-svc-227kz [2.515989637s]
Jan  7 12:44:42.431: INFO: Created: latency-svc-g4rgg
Jan  7 12:44:42.472: INFO: Got endpoints: latency-svc-g4rgg [2.550899337s]
Jan  7 12:44:42.666: INFO: Created: latency-svc-8ch5d
Jan  7 12:44:42.695: INFO: Got endpoints: latency-svc-8ch5d [2.52814369s]
Jan  7 12:44:42.798: INFO: Created: latency-svc-sb7bz
Jan  7 12:44:42.827: INFO: Got endpoints: latency-svc-sb7bz [2.517296604s]
Jan  7 12:44:42.923: INFO: Created: latency-svc-7z2sx
Jan  7 12:44:43.032: INFO: Got endpoints: latency-svc-7z2sx [2.661026443s]
Jan  7 12:44:43.051: INFO: Created: latency-svc-k9kvx
Jan  7 12:44:43.127: INFO: Got endpoints: latency-svc-k9kvx [2.574545383s]
Jan  7 12:44:43.303: INFO: Created: latency-svc-7hm4m
Jan  7 12:44:43.310: INFO: Got endpoints: latency-svc-7hm4m [2.565489218s]
Jan  7 12:44:43.589: INFO: Created: latency-svc-x22rl
Jan  7 12:44:43.643: INFO: Got endpoints: latency-svc-x22rl [2.744877037s]
Jan  7 12:44:43.868: INFO: Created: latency-svc-wx4nl
Jan  7 12:44:43.883: INFO: Got endpoints: latency-svc-wx4nl [2.917250221s]
Jan  7 12:44:44.153: INFO: Created: latency-svc-fngrj
Jan  7 12:44:44.154: INFO: Got endpoints: latency-svc-fngrj [2.935968795s]
Jan  7 12:44:44.341: INFO: Created: latency-svc-jv48b
Jan  7 12:44:44.370: INFO: Got endpoints: latency-svc-jv48b [2.987943847s]
Jan  7 12:44:44.559: INFO: Created: latency-svc-mrx4s
Jan  7 12:44:44.566: INFO: Got endpoints: latency-svc-mrx4s [2.956526094s]
Jan  7 12:44:44.728: INFO: Created: latency-svc-zpncb
Jan  7 12:44:44.749: INFO: Got endpoints: latency-svc-zpncb [2.977356397s]
Jan  7 12:44:44.811: INFO: Created: latency-svc-l22t6
Jan  7 12:44:44.823: INFO: Got endpoints: latency-svc-l22t6 [2.843000169s]
Jan  7 12:44:44.954: INFO: Created: latency-svc-fgqrs
Jan  7 12:44:44.988: INFO: Got endpoints: latency-svc-fgqrs [2.78682561s]
Jan  7 12:44:45.000: INFO: Created: latency-svc-g5z8v
Jan  7 12:44:45.013: INFO: Got endpoints: latency-svc-g5z8v [2.615927279s]
Jan  7 12:44:45.143: INFO: Created: latency-svc-nz7rw
Jan  7 12:44:45.150: INFO: Got endpoints: latency-svc-nz7rw [2.677445838s]
Jan  7 12:44:45.202: INFO: Created: latency-svc-2nsgd
Jan  7 12:44:45.330: INFO: Got endpoints: latency-svc-2nsgd [2.63456666s]
Jan  7 12:44:45.353: INFO: Created: latency-svc-6d89w
Jan  7 12:44:45.374: INFO: Got endpoints: latency-svc-6d89w [2.546709141s]
Jan  7 12:44:45.493: INFO: Created: latency-svc-7zpww
Jan  7 12:44:45.513: INFO: Got endpoints: latency-svc-7zpww [2.480630973s]
Jan  7 12:44:45.567: INFO: Created: latency-svc-khkg4
Jan  7 12:44:45.580: INFO: Got endpoints: latency-svc-khkg4 [2.452530657s]
Jan  7 12:44:45.763: INFO: Created: latency-svc-tk746
Jan  7 12:44:45.782: INFO: Got endpoints: latency-svc-tk746 [2.472359588s]
Jan  7 12:44:45.830: INFO: Created: latency-svc-n6dg2
Jan  7 12:44:45.926: INFO: Got endpoints: latency-svc-n6dg2 [2.282149669s]
Jan  7 12:44:45.948: INFO: Created: latency-svc-pwc5g
Jan  7 12:44:45.977: INFO: Got endpoints: latency-svc-pwc5g [2.093675827s]
Jan  7 12:44:46.121: INFO: Created: latency-svc-l2n9x
Jan  7 12:44:46.161: INFO: Got endpoints: latency-svc-l2n9x [2.007073721s]
Jan  7 12:44:46.313: INFO: Created: latency-svc-2khnm
Jan  7 12:44:46.348: INFO: Got endpoints: latency-svc-2khnm [1.977850305s]
Jan  7 12:44:46.597: INFO: Created: latency-svc-wxl5c
Jan  7 12:44:46.646: INFO: Got endpoints: latency-svc-wxl5c [2.079648647s]
Jan  7 12:44:46.762: INFO: Created: latency-svc-g8swx
Jan  7 12:44:46.771: INFO: Got endpoints: latency-svc-g8swx [2.022101956s]
Jan  7 12:44:46.841: INFO: Created: latency-svc-dwl97
Jan  7 12:44:46.971: INFO: Got endpoints: latency-svc-dwl97 [2.147435098s]
Jan  7 12:44:47.001: INFO: Created: latency-svc-dlr4k
Jan  7 12:44:47.001: INFO: Got endpoints: latency-svc-dlr4k [2.012914784s]
Jan  7 12:44:47.082: INFO: Created: latency-svc-jgwqz
Jan  7 12:44:47.181: INFO: Got endpoints: latency-svc-jgwqz [2.168575223s]
Jan  7 12:44:47.209: INFO: Created: latency-svc-q6dtx
Jan  7 12:44:47.218: INFO: Got endpoints: latency-svc-q6dtx [2.06713511s]
Jan  7 12:44:47.268: INFO: Created: latency-svc-hhplj
Jan  7 12:44:47.520: INFO: Got endpoints: latency-svc-hhplj [2.189143314s]
Jan  7 12:44:47.639: INFO: Created: latency-svc-nwxb2
Jan  7 12:44:47.808: INFO: Got endpoints: latency-svc-nwxb2 [2.433455117s]
Jan  7 12:44:47.845: INFO: Created: latency-svc-lcq95
Jan  7 12:44:47.891: INFO: Got endpoints: latency-svc-lcq95 [2.377825042s]
Jan  7 12:44:48.050: INFO: Created: latency-svc-t2h27
Jan  7 12:44:48.080: INFO: Got endpoints: latency-svc-t2h27 [2.500781532s]
Jan  7 12:44:48.330: INFO: Created: latency-svc-zxq9k
Jan  7 12:44:48.394: INFO: Got endpoints: latency-svc-zxq9k [2.611272981s]
Jan  7 12:44:48.592: INFO: Created: latency-svc-slwnb
Jan  7 12:44:48.612: INFO: Got endpoints: latency-svc-slwnb [2.685844306s]
Jan  7 12:44:48.750: INFO: Created: latency-svc-7sw4k
Jan  7 12:44:48.830: INFO: Got endpoints: latency-svc-7sw4k [2.853000161s]
Jan  7 12:44:48.904: INFO: Created: latency-svc-55q4b
Jan  7 12:44:49.030: INFO: Got endpoints: latency-svc-55q4b [2.868688239s]
Jan  7 12:44:49.052: INFO: Created: latency-svc-b8xmh
Jan  7 12:44:49.069: INFO: Got endpoints: latency-svc-b8xmh [2.720265101s]
Jan  7 12:44:49.279: INFO: Created: latency-svc-b8zl9
Jan  7 12:44:49.290: INFO: Got endpoints: latency-svc-b8zl9 [2.64268387s]
Jan  7 12:44:49.458: INFO: Created: latency-svc-8l98d
Jan  7 12:44:49.480: INFO: Got endpoints: latency-svc-8l98d [2.709228348s]
Jan  7 12:44:49.548: INFO: Created: latency-svc-dq2kz
Jan  7 12:44:49.666: INFO: Got endpoints: latency-svc-dq2kz [2.695049731s]
Jan  7 12:44:49.681: INFO: Created: latency-svc-4qlrk
Jan  7 12:44:49.710: INFO: Got endpoints: latency-svc-4qlrk [2.708333278s]
Jan  7 12:44:49.761: INFO: Created: latency-svc-6wk9f
Jan  7 12:44:49.940: INFO: Got endpoints: latency-svc-6wk9f [2.758683858s]
Jan  7 12:44:49.954: INFO: Created: latency-svc-jt8sc
Jan  7 12:44:49.984: INFO: Got endpoints: latency-svc-jt8sc [2.766126798s]
Jan  7 12:44:50.021: INFO: Created: latency-svc-rwfdr
Jan  7 12:44:50.172: INFO: Got endpoints: latency-svc-rwfdr [2.651786354s]
Jan  7 12:44:50.180: INFO: Created: latency-svc-8sfdn
Jan  7 12:44:50.266: INFO: Got endpoints: latency-svc-8sfdn [2.457859494s]
Jan  7 12:44:50.455: INFO: Created: latency-svc-n44jt
Jan  7 12:44:50.478: INFO: Got endpoints: latency-svc-n44jt [2.585996471s]
Jan  7 12:44:50.672: INFO: Created: latency-svc-4bmgp
Jan  7 12:44:50.688: INFO: Got endpoints: latency-svc-4bmgp [2.606761821s]
Jan  7 12:44:50.944: INFO: Created: latency-svc-znqrw
Jan  7 12:44:50.966: INFO: Got endpoints: latency-svc-znqrw [2.571818689s]
Jan  7 12:44:51.155: INFO: Created: latency-svc-lt7tc
Jan  7 12:44:51.174: INFO: Got endpoints: latency-svc-lt7tc [2.562291548s]
Jan  7 12:44:51.249: INFO: Created: latency-svc-q4hq4
Jan  7 12:44:51.496: INFO: Got endpoints: latency-svc-q4hq4 [2.665561321s]
Jan  7 12:44:51.526: INFO: Created: latency-svc-m4b8w
Jan  7 12:44:51.550: INFO: Got endpoints: latency-svc-m4b8w [2.520436463s]
Jan  7 12:44:51.722: INFO: Created: latency-svc-rkd7x
Jan  7 12:44:51.724: INFO: Got endpoints: latency-svc-rkd7x [2.654363214s]
Jan  7 12:44:51.810: INFO: Created: latency-svc-4vgsq
Jan  7 12:44:51.931: INFO: Got endpoints: latency-svc-4vgsq [2.639999636s]
Jan  7 12:44:51.948: INFO: Created: latency-svc-c2799
Jan  7 12:44:51.989: INFO: Got endpoints: latency-svc-c2799 [2.508189642s]
Jan  7 12:44:53.022: INFO: Created: latency-svc-2bn4g
Jan  7 12:44:53.038: INFO: Got endpoints: latency-svc-2bn4g [3.371707284s]
Jan  7 12:44:53.198: INFO: Created: latency-svc-8ghs9
Jan  7 12:44:53.255: INFO: Got endpoints: latency-svc-8ghs9 [3.544501851s]
Jan  7 12:44:53.262: INFO: Created: latency-svc-z7dlk
Jan  7 12:44:53.397: INFO: Got endpoints: latency-svc-z7dlk [3.456513335s]
Jan  7 12:44:53.447: INFO: Created: latency-svc-sft2d
Jan  7 12:44:53.652: INFO: Got endpoints: latency-svc-sft2d [3.667720994s]
Jan  7 12:44:53.683: INFO: Created: latency-svc-cj55t
Jan  7 12:44:53.713: INFO: Got endpoints: latency-svc-cj55t [3.54038845s]
Jan  7 12:44:53.864: INFO: Created: latency-svc-722nz
Jan  7 12:44:53.901: INFO: Got endpoints: latency-svc-722nz [3.634294552s]
Jan  7 12:44:54.058: INFO: Created: latency-svc-g8bc4
Jan  7 12:44:54.079: INFO: Got endpoints: latency-svc-g8bc4 [3.601516765s]
Jan  7 12:44:54.149: INFO: Created: latency-svc-j5b49
Jan  7 12:44:54.224: INFO: Got endpoints: latency-svc-j5b49 [3.536771417s]
Jan  7 12:44:54.272: INFO: Created: latency-svc-2jhc7
Jan  7 12:44:54.301: INFO: Got endpoints: latency-svc-2jhc7 [3.334652998s]
Jan  7 12:44:54.474: INFO: Created: latency-svc-knr76
Jan  7 12:44:54.703: INFO: Created: latency-svc-26wjf
Jan  7 12:44:54.755: INFO: Got endpoints: latency-svc-knr76 [3.580150148s]
Jan  7 12:44:54.908: INFO: Created: latency-svc-rr692
Jan  7 12:44:54.938: INFO: Got endpoints: latency-svc-26wjf [3.44157072s]
Jan  7 12:44:54.964: INFO: Got endpoints: latency-svc-rr692 [3.413410248s]
Jan  7 12:44:55.117: INFO: Created: latency-svc-kbwwx
Jan  7 12:44:55.135: INFO: Got endpoints: latency-svc-kbwwx [3.411566529s]
Jan  7 12:44:55.178: INFO: Created: latency-svc-fs6zg
Jan  7 12:44:55.304: INFO: Got endpoints: latency-svc-fs6zg [3.373542029s]
Jan  7 12:44:55.326: INFO: Created: latency-svc-rn8gr
Jan  7 12:44:55.341: INFO: Got endpoints: latency-svc-rn8gr [3.351827729s]
Jan  7 12:44:55.404: INFO: Created: latency-svc-wl6bp
Jan  7 12:44:55.484: INFO: Got endpoints: latency-svc-wl6bp [2.445414271s]
Jan  7 12:44:55.504: INFO: Created: latency-svc-6d28x
Jan  7 12:44:55.514: INFO: Got endpoints: latency-svc-6d28x [2.25917585s]
Jan  7 12:44:55.564: INFO: Created: latency-svc-gr4vr
Jan  7 12:44:55.568: INFO: Got endpoints: latency-svc-gr4vr [2.170234826s]
Jan  7 12:44:55.691: INFO: Created: latency-svc-w9fbk
Jan  7 12:44:55.701: INFO: Got endpoints: latency-svc-w9fbk [2.048256887s]
Jan  7 12:44:55.884: INFO: Created: latency-svc-b2zmz
Jan  7 12:44:55.890: INFO: Got endpoints: latency-svc-b2zmz [2.177016087s]
Jan  7 12:44:56.678: INFO: Created: latency-svc-4bzkw
Jan  7 12:44:56.841: INFO: Got endpoints: latency-svc-4bzkw [2.940117158s]
Jan  7 12:44:57.169: INFO: Created: latency-svc-wmgsr
Jan  7 12:44:57.273: INFO: Got endpoints: latency-svc-wmgsr [3.193342559s]
Jan  7 12:44:57.348: INFO: Created: latency-svc-trnj6
Jan  7 12:44:57.522: INFO: Got endpoints: latency-svc-trnj6 [3.297133648s]
Jan  7 12:44:57.553: INFO: Created: latency-svc-kdfhf
Jan  7 12:44:57.553: INFO: Got endpoints: latency-svc-kdfhf [3.251403117s]
Jan  7 12:44:57.597: INFO: Created: latency-svc-9htc4
Jan  7 12:44:57.777: INFO: Got endpoints: latency-svc-9htc4 [3.021082704s]
Jan  7 12:44:57.809: INFO: Created: latency-svc-g4zlv
Jan  7 12:44:57.828: INFO: Got endpoints: latency-svc-g4zlv [2.889470463s]
Jan  7 12:44:58.024: INFO: Created: latency-svc-dwqzx
Jan  7 12:44:58.024: INFO: Got endpoints: latency-svc-dwqzx [3.059491504s]
Jan  7 12:44:58.032: INFO: Created: latency-svc-r58rh
Jan  7 12:44:58.037: INFO: Got endpoints: latency-svc-r58rh [2.901300321s]
Jan  7 12:44:58.228: INFO: Created: latency-svc-4fc6h
Jan  7 12:44:58.233: INFO: Got endpoints: latency-svc-4fc6h [2.928055528s]
Jan  7 12:44:58.301: INFO: Created: latency-svc-j4h8h
Jan  7 12:44:58.409: INFO: Got endpoints: latency-svc-j4h8h [3.067710199s]
Jan  7 12:44:58.456: INFO: Created: latency-svc-f4pdd
Jan  7 12:44:58.495: INFO: Got endpoints: latency-svc-f4pdd [3.010726252s]
Jan  7 12:44:58.649: INFO: Created: latency-svc-qgb7b
Jan  7 12:44:58.667: INFO: Got endpoints: latency-svc-qgb7b [3.153253088s]
Jan  7 12:44:58.880: INFO: Created: latency-svc-q7vt7
Jan  7 12:44:58.901: INFO: Got endpoints: latency-svc-q7vt7 [3.332859946s]
Jan  7 12:44:58.902: INFO: Latencies: [207.90519ms 304.053519ms 336.211913ms 406.344353ms 638.202397ms 833.786192ms 848.5321ms 1.256690875s 1.286024459s 1.463134379s 1.639458747s 1.876298284s 1.977850305s 2.007073721s 2.012914784s 2.022101956s 2.037178191s 2.038588875s 2.048256887s 2.06713511s 2.068000342s 2.068542402s 2.079306148s 2.079648647s 2.081487818s 2.093675827s 2.110801635s 2.112462906s 2.118110455s 2.127659643s 2.147435098s 2.156819109s 2.168575223s 2.170234826s 2.171925655s 2.177016087s 2.189143314s 2.191823149s 2.194518737s 2.219657836s 2.222669263s 2.236253767s 2.238931936s 2.239005832s 2.241370054s 2.247080469s 2.25917585s 2.272700639s 2.279559213s 2.282149669s 2.290747875s 2.293296928s 2.299401572s 2.311040802s 2.314242198s 2.325813075s 2.33428073s 2.337133913s 2.345723885s 2.374002258s 2.37474368s 2.377825042s 2.380752747s 2.382704539s 2.39009088s 2.40921653s 2.416110622s 2.421935126s 2.433455117s 2.442147465s 2.445256668s 2.445414271s 2.452530657s 2.453820917s 2.457859494s 2.458426702s 2.472359588s 2.472479415s 2.480630973s 2.483322377s 2.488802424s 2.500781532s 2.502568633s 2.508189642s 2.515989637s 2.517296604s 2.520436463s 2.524153837s 2.52814369s 2.528262502s 2.528798972s 2.529415947s 2.54361555s 2.545748888s 2.546709141s 2.550899337s 2.552281632s 2.554114161s 2.562291548s 2.565489218s 2.571818689s 2.57409881s 2.574545383s 2.582660026s 2.583342442s 2.585996471s 2.594117569s 2.606761821s 2.611272981s 2.615382406s 2.615927279s 2.619270512s 2.629136516s 2.63456666s 2.639999636s 2.64268387s 2.651786354s 2.654363214s 2.661026443s 2.66352552s 2.665561321s 2.677445838s 2.685844306s 2.689506728s 2.690255145s 2.693991567s 2.695049731s 2.695389635s 2.708333278s 2.709228348s 2.710104449s 2.713167327s 2.715894859s 2.718741328s 2.720265101s 2.734359172s 2.744877037s 2.746218022s 2.758683858s 2.762091637s 2.766126798s 2.779955936s 2.782951699s 2.78682561s 2.789553419s 2.802728193s 2.811645621s 2.836685246s 2.843000169s 2.84568455s 2.847546259s 2.853000161s 2.857884275s 2.865683312s 2.868688239s 2.879319053s 2.889470463s 2.896336763s 2.901300321s 2.903797082s 2.908164943s 2.917192028s 2.917250221s 2.923865976s 2.927527655s 2.928055528s 2.931695389s 2.932262329s 2.935968795s 2.940117158s 2.956526094s 2.977356397s 2.981647936s 2.984302949s 2.987943847s 3.010726252s 3.021082704s 3.059491504s 3.067710199s 3.085525119s 3.153253088s 3.193342559s 3.251403117s 3.297133648s 3.332859946s 3.334652998s 3.351827729s 3.371707284s 3.373542029s 3.411566529s 3.413410248s 3.44157072s 3.456513335s 3.536771417s 3.54038845s 3.544501851s 3.580150148s 3.601516765s 3.634294552s 3.667720994s]
Jan  7 12:44:58.902: INFO: 50 %ile: 2.571818689s
Jan  7 12:44:58.902: INFO: 90 %ile: 3.153253088s
Jan  7 12:44:58.902: INFO: 99 %ile: 3.634294552s
Jan  7 12:44:58.902: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:44:58.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-rmr8n" for this suite.
Jan  7 12:46:13.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:46:13.191: INFO: namespace: e2e-tests-svc-latency-rmr8n, resource: bindings, ignored listing per whitelist
Jan  7 12:46:13.324: INFO: namespace e2e-tests-svc-latency-rmr8n deletion completed in 1m14.383373074s

• [SLOW TEST:120.042 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
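The 50/90/99 %ile figures in the latency summary above are read out of the sorted sample list. A minimal sketch of that calculation using a nearest-rank definition — this is an illustrative helper, not the e2e framework's own code, and the framework's exact rounding may differ:

```python
def percentile(sorted_samples, p):
    """Nearest-rank percentile over an already-sorted list of samples.

    Picks the sample at rank ceil-ish (N * p / 100), clamped into range.
    Hypothetical helper for illustration; not the framework's implementation.
    """
    n = len(sorted_samples)
    idx = max(0, min(n - 1, (n * p) // 100 - 1))
    return sorted_samples[idx]


# With the test's 200 samples, percentile(latencies, 50) would return the
# 100th-smallest latency, matching the "50 %ile" line in the log.
```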
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:46:13.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b0391274-314b-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 12:46:13.747: INFO: Waiting up to 5m0s for pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-fv2pw" to be "success or failure"
Jan  7 12:46:13.787: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.66164ms
Jan  7 12:46:15.808: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060863153s
Jan  7 12:46:17.824: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076672005s
Jan  7 12:46:19.874: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12658824s
Jan  7 12:46:21.968: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221547846s
Jan  7 12:46:24.001: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.254146669s
STEP: Saw pod success
Jan  7 12:46:24.001: INFO: Pod "pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:46:24.011: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 12:46:24.601: INFO: Waiting for pod pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005 to disappear
Jan  7 12:46:24.629: INFO: Pod pod-secrets-b03b3406-314b-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:46:24.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fv2pw" for this suite.
Jan  7 12:46:30.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:46:31.072: INFO: namespace: e2e-tests-secrets-fv2pw, resource: bindings, ignored listing per whitelist
Jan  7 12:46:31.136: INFO: namespace e2e-tests-secrets-fv2pw deletion completed in 6.493686139s

• [SLOW TEST:17.811 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
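Each Pending→Succeeded sequence in the log above comes from the framework polling the pod's phase until it reaches the "success or failure" terminal state, logging the elapsed time on every poll. A minimal sketch of that wait loop — `get_phase` is a hypothetical callback standing in for a pod GET; the real framework also inspects readiness and container statuses:

```python
import time

# Pod phases that end the wait, per the Kubernetes pod lifecycle.
TERMINAL_PHASES = {"Succeeded", "Failed"}


def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod is Succeeded or Failed.

    Illustrative sketch only; get_phase() would wrap an API call such as
    reading pod.status.phase. Raises TimeoutError if the deadline passes.
    """
    start = clock()
    while True:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        if clock() - start >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

With a pod that reports Pending a few times and then Succeeded, the loop returns "Succeeded", mirroring the Elapsed lines in the transcript.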
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:46:31.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  7 12:46:31.452: INFO: Waiting up to 5m0s for pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-fhqlf" to be "success or failure"
Jan  7 12:46:31.473: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.485245ms
Jan  7 12:46:33.655: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202658576s
Jan  7 12:46:35.677: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224103924s
Jan  7 12:46:37.734: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281085477s
Jan  7 12:46:39.755: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302271151s
Jan  7 12:46:42.411: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.95848692s
STEP: Saw pod success
Jan  7 12:46:42.411: INFO: Pod "downward-api-bac68b92-314b-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:46:42.421: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-bac68b92-314b-11ea-8b51-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  7 12:46:42.690: INFO: Waiting for pod downward-api-bac68b92-314b-11ea-8b51-0242ac110005 to disappear
Jan  7 12:46:42.732: INFO: Pod downward-api-bac68b92-314b-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:46:42.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fhqlf" for this suite.
Jan  7 12:46:48.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:46:48.921: INFO: namespace: e2e-tests-downward-api-fhqlf, resource: bindings, ignored listing per whitelist
Jan  7 12:46:49.032: INFO: namespace e2e-tests-downward-api-fhqlf deletion completed in 6.291887654s

• [SLOW TEST:17.895 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:46:49.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4v72w.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4v72w.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4v72w.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4v72w.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4v72w.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4v72w.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  7 12:47:05.555: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.564: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.582: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.598: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.607: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.621: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.628: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4v72w.svc.cluster.local from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.633: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.644: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.648: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005: the server could not find the requested resource (get pods dns-test-c5697fc6-314b-11ea-8b51-0242ac110005)
Jan  7 12:47:05.648: INFO: Lookups using e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4v72w.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  7 12:47:10.859: INFO: DNS probes using e2e-tests-dns-4v72w/dns-test-c5697fc6-314b-11ea-8b51-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:47:10.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-4v72w" for this suite.
Jan  7 12:47:17.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:47:17.298: INFO: namespace: e2e-tests-dns-4v72w, resource: bindings, ignored listing per whitelist
Jan  7 12:47:17.362: INFO: namespace e2e-tests-dns-4v72w deletion completed in 6.279703037s

• [SLOW TEST:28.330 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
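The "Unable to read jessie_*" failures at 12:47:05 followed by "DNS probes ... succeeded" at 12:47:10 are the probe's normal poll-and-retry behaviour: the test repeatedly re-reads each expected result file from the probe pod until every lookup has written OK. A minimal sketch of that retry pattern — `read_result` is a hypothetical callback standing in for fetching `/results/<name>` from the pod, not the e2e framework's code:

```python
import time


def wait_for_probes(names, read_result, timeout=60.0, interval=5.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll until every expected DNS probe result is present.

    read_result(name) returns a truthy value once the probe pod has
    written its OK file for that lookup, falsy while it is still missing.
    Illustrative sketch; raises TimeoutError listing the failed lookups,
    much like the "Lookups ... failed for: [...]" line in the log.
    """
    deadline = clock() + timeout
    pending = set(names)
    while True:
        pending = {n for n in pending if not read_result(n)}
        if not pending:
            return True
        if clock() >= deadline:
            raise TimeoutError(f"lookups failed for: {sorted(pending)}")
        sleep(interval)
```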
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:47:17.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  7 12:47:18.115: INFO: Waiting up to 5m0s for pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk" in namespace "e2e-tests-svcaccounts-4tzzf" to be "success or failure"
Jan  7 12:47:18.187: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 71.332774ms
Jan  7 12:47:20.206: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090069893s
Jan  7 12:47:22.226: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110360311s
Jan  7 12:47:24.260: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144077586s
Jan  7 12:47:27.042: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.926332333s
Jan  7 12:47:29.061: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.945394843s
Jan  7 12:47:31.083: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.967019061s
Jan  7 12:47:33.236: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.119984336s
Jan  7 12:47:35.255: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.139732853s
STEP: Saw pod success
Jan  7 12:47:35.255: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk" satisfied condition "success or failure"
Jan  7 12:47:35.266: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk container token-test: 
STEP: delete the pod
Jan  7 12:47:35.643: INFO: Waiting for pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk to disappear
Jan  7 12:47:35.744: INFO: Pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-dh7hk no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  7 12:47:35.775: INFO: Waiting up to 5m0s for pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn" in namespace "e2e-tests-svcaccounts-4tzzf" to be "success or failure"
Jan  7 12:47:35.790: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.738097ms
Jan  7 12:47:37.804: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028436361s
Jan  7 12:47:39.839: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063350629s
Jan  7 12:47:42.173: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397565989s
Jan  7 12:47:44.234: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4587595s
Jan  7 12:47:46.343: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.567850083s
Jan  7 12:47:48.509: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.73337674s
Jan  7 12:47:50.616: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.840653757s
Jan  7 12:47:52.628: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.852126983s
STEP: Saw pod success
Jan  7 12:47:52.628: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn" satisfied condition "success or failure"
Jan  7 12:47:52.636: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn container root-ca-test: 
STEP: delete the pod
Jan  7 12:47:52.959: INFO: Waiting for pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn to disappear
Jan  7 12:47:52.979: INFO: Pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-689vn no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  7 12:47:53.041: INFO: Waiting up to 5m0s for pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m" in namespace "e2e-tests-svcaccounts-4tzzf" to be "success or failure"
Jan  7 12:47:53.062: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 20.984147ms
Jan  7 12:47:55.097: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055694645s
Jan  7 12:47:57.125: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083812842s
Jan  7 12:47:59.676: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634927267s
Jan  7 12:48:01.845: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803299275s
Jan  7 12:48:04.183: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 11.142163558s
Jan  7 12:48:06.195: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 13.153343035s
Jan  7 12:48:08.254: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Pending", Reason="", readiness=false. Elapsed: 15.213088466s
Jan  7 12:48:10.310: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.268987485s
STEP: Saw pod success
Jan  7 12:48:10.311: INFO: Pod "pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m" satisfied condition "success or failure"
Jan  7 12:48:10.343: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m container namespace-test: 
STEP: delete the pod
Jan  7 12:48:10.474: INFO: Waiting for pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m to disappear
Jan  7 12:48:10.485: INFO: Pod pod-service-account-d69b4871-314b-11ea-8b51-0242ac110005-vbm4m no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:48:10.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-4tzzf" for this suite.
Jan  7 12:48:18.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:48:18.818: INFO: namespace: e2e-tests-svcaccounts-4tzzf, resource: bindings, ignored listing per whitelist
Jan  7 12:48:18.818: INFO: namespace e2e-tests-svcaccounts-4tzzf deletion completed in 8.320597215s

• [SLOW TEST:61.456 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
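[Editor's note] The ServiceAccounts test above creates three short-lived pods, each reading one of the files the kubelet auto-mounts from the default service account. A minimal sketch of that kind of pod (names are illustrative, not the generated names in the log):

```yaml
# Illustrative sketch only; the test generates its own pod and container names.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    # The token, ca.crt, and namespace files are auto-mounted here by the kubelet.
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
```

The pod exits after printing the file, which is why the log waits for "success or failure" (Phase=Succeeded) rather than "running and ready".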
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:48:18.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-fb3b7a4e-314b-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 12:48:19.787: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-xpt77" to be "success or failure"
Jan  7 12:48:19.840: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 52.919134ms
Jan  7 12:48:21.881: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093595059s
Jan  7 12:48:23.916: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128973777s
Jan  7 12:48:26.099: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312180003s
Jan  7 12:48:28.143: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35562647s
Jan  7 12:48:30.167: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.379344404s
STEP: Saw pod success
Jan  7 12:48:30.167: INFO: Pod "pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:48:30.205: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 12:48:30.251: INFO: Waiting for pod pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005 to disappear
Jan  7 12:48:30.257: INFO: Pod pod-projected-secrets-fb3d65d0-314b-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:48:30.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xpt77" for this suite.
Jan  7 12:48:36.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:48:36.434: INFO: namespace: e2e-tests-projected-xpt77, resource: bindings, ignored listing per whitelist
Jan  7 12:48:36.642: INFO: namespace e2e-tests-projected-xpt77 deletion completed in 6.375083723s

• [SLOW TEST:17.823 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
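[Editor's note] "mappings and Item Mode set" in the projected-secret test above refers to remapping a secret key to a new file path and giving that file an explicit per-item mode. A hedged sketch of such a pod (all names and the key `data-1` are illustrative):

```yaml
# Illustrative sketch only; the test's actual names are generated.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example
          items:
          - key: data-1            # secret key
            path: new-path-data-1  # the "mapping": remapped file path
            mode: 0400             # the per-item file mode under test
```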
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:48:36.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  7 12:51:39.295: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:39.331: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:41.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:41.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:43.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:43.343: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:45.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:45.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:47.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:47.643: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:49.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:49.346: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:51.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:51.345: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:53.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:53.437: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:55.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:55.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:57.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:57.353: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:51:59.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:51:59.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:01.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:01.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:03.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:03.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:05.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:05.348: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:07.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:07.356: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:09.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:09.342: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:11.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:11.471: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:13.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:13.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:15.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:15.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:17.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:17.348: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:19.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:19.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:21.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:21.345: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:23.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:23.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:25.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:25.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:27.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:27.348: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:29.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:29.342: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:31.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:31.353: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:33.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:33.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:35.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:35.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:37.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:37.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:39.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:40.376: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:41.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:41.346: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:43.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:43.360: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:45.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:45.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:47.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:47.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:49.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:49.346: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:51.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:51.353: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:53.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:53.355: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:55.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:55.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:57.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:57.351: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:52:59.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:52:59.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:01.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:01.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:03.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:03.345: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:05.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:05.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:07.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:07.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:09.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:09.349: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:11.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:11.447: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:13.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:13.348: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:15.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:15.348: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:17.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:17.439: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:19.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:19.594: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:21.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:21.347: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:23.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:23.375: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:25.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:25.373: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:27.332: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:27.344: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:29.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:29.357: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:31.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:31.350: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  7 12:53:33.331: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  7 12:53:33.349: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:53:33.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xhq77" for this suite.
Jan  7 12:53:57.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:53:57.685: INFO: namespace: e2e-tests-container-lifecycle-hook-xhq77, resource: bindings, ignored listing per whitelist
Jan  7 12:53:57.804: INFO: namespace e2e-tests-container-lifecycle-hook-xhq77 deletion completed in 24.445261799s

• [SLOW TEST:321.161 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
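[Editor's note] The lifecycle-hook test above creates a pod with a postStart exec hook. A minimal sketch of the shape of that pod (command contents are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook-example
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created; the container's
          # status is not set to Running until the handler completes.
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
```

The long "Waiting for pod ... to disappear" run in the log is the framework polling every 2s for pod deletion, which is slow here because the pod's graceful termination has to complete first.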
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:53:57.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c4f38f65-314c-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 12:53:58.120: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-tzlh5" to be "success or failure"
Jan  7 12:53:58.142: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.217733ms
Jan  7 12:54:00.169: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048561639s
Jan  7 12:54:02.201: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080664758s
Jan  7 12:54:04.215: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094739182s
Jan  7 12:54:06.240: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120439507s
Jan  7 12:54:08.255: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.135162132s
Jan  7 12:54:10.284: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.164074153s
STEP: Saw pod success
Jan  7 12:54:10.284: INFO: Pod "pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:54:10.339: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 12:54:11.054: INFO: Waiting for pod pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005 to disappear
Jan  7 12:54:11.090: INFO: Pod pod-configmaps-c4f61398-314c-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:54:11.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tzlh5" for this suite.
Jan  7 12:54:17.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:54:17.261: INFO: namespace: e2e-tests-configmap-tzlh5, resource: bindings, ignored listing per whitelist
Jan  7 12:54:17.353: INFO: namespace e2e-tests-configmap-tzlh5 deletion completed in 6.162291224s

• [SLOW TEST:19.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
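[Editor's note] "consumable in multiple volumes in the same pod" above means one ConfigMap backing two separate volume mounts. A hedged sketch (names and key are illustrative):

```yaml
# Illustrative sketch only; the test generates its own ConfigMap and pod names.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume-1/data-1", "/etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  # The same ConfigMap backs both volumes.
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-example
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-example
```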
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:54:17.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:54:17.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005" in namespace "e2e-tests-downward-api-l2nn4" to be "success or failure"
Jan  7 12:54:17.792: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.196587ms
Jan  7 12:54:20.000: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250261448s
Jan  7 12:54:22.057: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307135187s
Jan  7 12:54:24.074: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323556003s
Jan  7 12:54:26.398: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.647652528s
Jan  7 12:54:28.410: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.66023359s
Jan  7 12:54:30.440: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.689462941s
STEP: Saw pod success
Jan  7 12:54:30.440: INFO: Pod "downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:54:30.472: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:54:30.803: INFO: Waiting for pod downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005 to disappear
Jan  7 12:54:30.830: INFO: Pod downwardapi-volume-d0aa258e-314c-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:54:30.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l2nn4" for this suite.
Jan  7 12:54:38.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:54:39.262: INFO: namespace: e2e-tests-downward-api-l2nn4, resource: bindings, ignored listing per whitelist
Jan  7 12:54:39.315: INFO: namespace e2e-tests-downward-api-l2nn4 deletion completed in 8.41448712s

• [SLOW TEST:21.961 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
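[Editor's note] The downward API test above verifies that `defaultMode` is applied to every file in a downward API volume. A minimal sketch (paths and fields are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Listing the mount shows each file created with the default mode.
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every file lacking an explicit per-item mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```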
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:54:39.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 12:54:39.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-nnslc'
Jan  7 12:54:43.534: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  7 12:54:43.534: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  7 12:54:43.658: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  7 12:54:43.745: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  7 12:54:43.970: INFO: scanned /root for discovery docs: 
Jan  7 12:54:43.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-nnslc'
Jan  7 12:55:25.990: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  7 12:55:25.990: INFO: stdout: "Created e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54\nScaling up e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  7 12:55:25.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nnslc'
Jan  7 12:55:26.221: INFO: stderr: ""
Jan  7 12:55:26.221: INFO: stdout: "e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54-b78r6 "
Jan  7 12:55:26.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54-b78r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nnslc'
Jan  7 12:55:26.415: INFO: stderr: ""
Jan  7 12:55:26.415: INFO: stdout: "true"
Jan  7 12:55:26.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54-b78r6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nnslc'
Jan  7 12:55:26.770: INFO: stderr: ""
Jan  7 12:55:26.770: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  7 12:55:26.770: INFO: e2e-test-nginx-rc-bf6215999b205da7927e4c4e27805a54-b78r6 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  7 12:55:26.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nnslc'
Jan  7 12:55:27.496: INFO: stderr: ""
Jan  7 12:55:27.496: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:55:27.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nnslc" for this suite.
Jan  7 12:55:51.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:55:52.079: INFO: namespace: e2e-tests-kubectl-nnslc, resource: bindings, ignored listing per whitelist
Jan  7 12:55:52.149: INFO: namespace e2e-tests-kubectl-nnslc deletion completed in 24.377466805s

• [SLOW TEST:72.834 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:55:52.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bljvd
Jan  7 12:56:06.625: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bljvd
STEP: checking the pod's current state and verifying that restartCount is present
Jan  7 12:56:06.722: INFO: Initial restart count of pod liveness-exec is 0
Jan  7 12:56:56.392: INFO: Restart count of pod e2e-tests-container-probe-bljvd/liveness-exec is now 1 (49.669465739s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:56:56.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bljvd" for this suite.
Jan  7 12:57:04.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:57:04.735: INFO: namespace: e2e-tests-container-probe-bljvd, resource: bindings, ignored listing per whitelist
Jan  7 12:57:04.882: INFO: namespace e2e-tests-container-probe-bljvd deletion completed in 8.403490133s

• [SLOW TEST:72.733 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
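The liveness-exec pod exercised above follows the standard exec-probe pattern: the container creates `/tmp/health`, deletes it after a delay, and the `cat /tmp/health` probe then fails until the kubelet restarts the container (hence the restart count moving from 0 to 1). The manifest below is an approximate reconstruction for illustration; the image, shell command, and probe timings are assumptions, not taken from this log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox            # assumed minimal image
    args:
    - /bin/sh
    - -c
    # Create the probed file, then remove it after 10s so the probe starts failing.
    - touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe named in the test title
      initialDelaySeconds: 5              # assumed timing
      periodSeconds: 5                    # assumed timing
```

With settings like these, the probe succeeds for roughly the first 10 seconds and fails afterward, which is consistent with the ~50s elapsed time the test reports before observing restart count 1.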
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:57:04.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-348c56a8-314d-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 12:57:05.383: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-vj72r" to be "success or failure"
Jan  7 12:57:05.417: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.034387ms
Jan  7 12:57:07.799: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416529826s
Jan  7 12:57:09.827: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443862804s
Jan  7 12:57:12.690: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.307490976s
Jan  7 12:57:14.728: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.345554473s
Jan  7 12:57:16.748: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.365003465s
Jan  7 12:57:18.764: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.381731426s
STEP: Saw pod success
Jan  7 12:57:18.765: INFO: Pod "pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:57:18.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  7 12:57:19.751: INFO: Waiting for pod pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005 to disappear
Jan  7 12:57:19.788: INFO: Pod pod-projected-secrets-349dfec1-314d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:57:19.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vj72r" for this suite.
Jan  7 12:57:27.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:57:28.255: INFO: namespace: e2e-tests-projected-vj72r, resource: bindings, ignored listing per whitelist
Jan  7 12:57:28.255: INFO: namespace e2e-tests-projected-vj72r deletion completed in 8.453018155s

• [SLOW TEST:23.373 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:57:28.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 12:57:28.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-hswnl" to be "success or failure"
Jan  7 12:57:28.758: INFO: Pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 152.351062ms
Jan  7 12:57:30.780: INFO: Pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175123144s
Jan  7 12:57:32.801: INFO: Pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19557439s
Jan  7 12:57:34.911: INFO: Pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305968606s
Jan  7 12:57:36.945: INFO: Pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.339462572s
STEP: Saw pod success
Jan  7 12:57:36.945: INFO: Pod "downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 12:57:36.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 12:57:37.101: INFO: Waiting for pod downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005 to disappear
Jan  7 12:57:37.121: INFO: Pod downwardapi-volume-4273d09c-314d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:57:37.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hswnl" for this suite.
Jan  7 12:57:45.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:57:45.296: INFO: namespace: e2e-tests-projected-hswnl, resource: bindings, ignored listing per whitelist
Jan  7 12:57:45.319: INFO: namespace e2e-tests-projected-hswnl deletion completed in 8.189284498s

• [SLOW TEST:17.064 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:57:45.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  7 12:57:45.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:46.090: INFO: stderr: ""
Jan  7 12:57:46.090: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 12:57:46.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:46.235: INFO: stderr: ""
Jan  7 12:57:46.235: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan  7 12:57:51.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:51.431: INFO: stderr: ""
Jan  7 12:57:51.431: INFO: stdout: "update-demo-nautilus-dl55z update-demo-nautilus-rvdm9 "
Jan  7 12:57:51.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:51.758: INFO: stderr: ""
Jan  7 12:57:51.759: INFO: stdout: ""
Jan  7 12:57:51.759: INFO: update-demo-nautilus-dl55z is created but not running
Jan  7 12:57:56.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:56.939: INFO: stderr: ""
Jan  7 12:57:56.939: INFO: stdout: "update-demo-nautilus-dl55z update-demo-nautilus-rvdm9 "
Jan  7 12:57:56.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:57.044: INFO: stderr: ""
Jan  7 12:57:57.044: INFO: stdout: "true"
Jan  7 12:57:57.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:57.184: INFO: stderr: ""
Jan  7 12:57:57.184: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:57:57.184: INFO: validating pod update-demo-nautilus-dl55z
Jan  7 12:57:57.208: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:57:57.208: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:57:57.208: INFO: update-demo-nautilus-dl55z is verified up and running
Jan  7 12:57:57.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvdm9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:57.356: INFO: stderr: ""
Jan  7 12:57:57.356: INFO: stdout: "true"
Jan  7 12:57:57.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvdm9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:57.548: INFO: stderr: ""
Jan  7 12:57:57.548: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:57:57.548: INFO: validating pod update-demo-nautilus-rvdm9
Jan  7 12:57:57.564: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:57:57.564: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:57:57.564: INFO: update-demo-nautilus-rvdm9 is verified up and running
STEP: scaling down the replication controller
Jan  7 12:57:57.566: INFO: scanned /root for discovery docs: 
Jan  7 12:57:57.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:59.092: INFO: stderr: ""
Jan  7 12:57:59.093: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 12:57:59.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:57:59.433: INFO: stderr: ""
Jan  7 12:57:59.433: INFO: stdout: "update-demo-nautilus-dl55z update-demo-nautilus-rvdm9 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  7 12:58:04.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:04.664: INFO: stderr: ""
Jan  7 12:58:04.664: INFO: stdout: "update-demo-nautilus-dl55z "
Jan  7 12:58:04.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:04.770: INFO: stderr: ""
Jan  7 12:58:04.771: INFO: stdout: "true"
Jan  7 12:58:04.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:04.885: INFO: stderr: ""
Jan  7 12:58:04.885: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:58:04.886: INFO: validating pod update-demo-nautilus-dl55z
Jan  7 12:58:04.896: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:58:04.897: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:58:04.897: INFO: update-demo-nautilus-dl55z is verified up and running
STEP: scaling up the replication controller
Jan  7 12:58:04.899: INFO: scanned /root for discovery docs: 
Jan  7 12:58:04.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:06.102: INFO: stderr: ""
Jan  7 12:58:06.102: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  7 12:58:06.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:06.358: INFO: stderr: ""
Jan  7 12:58:06.358: INFO: stdout: "update-demo-nautilus-dl55z update-demo-nautilus-g4zw4 "
Jan  7 12:58:06.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:06.516: INFO: stderr: ""
Jan  7 12:58:06.517: INFO: stdout: "true"
Jan  7 12:58:06.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:06.690: INFO: stderr: ""
Jan  7 12:58:06.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:58:06.690: INFO: validating pod update-demo-nautilus-dl55z
Jan  7 12:58:06.738: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:58:06.738: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:58:06.738: INFO: update-demo-nautilus-dl55z is verified up and running
Jan  7 12:58:06.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4zw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:06.874: INFO: stderr: ""
Jan  7 12:58:06.874: INFO: stdout: ""
Jan  7 12:58:06.874: INFO: update-demo-nautilus-g4zw4 is created but not running
Jan  7 12:58:11.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:12.250: INFO: stderr: ""
Jan  7 12:58:12.250: INFO: stdout: "update-demo-nautilus-dl55z update-demo-nautilus-g4zw4 "
Jan  7 12:58:12.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:12.643: INFO: stderr: ""
Jan  7 12:58:12.644: INFO: stdout: "true"
Jan  7 12:58:12.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dl55z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:12.841: INFO: stderr: ""
Jan  7 12:58:12.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:58:12.841: INFO: validating pod update-demo-nautilus-dl55z
Jan  7 12:58:12.865: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:58:12.865: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:58:12.865: INFO: update-demo-nautilus-dl55z is verified up and running
Jan  7 12:58:12.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4zw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:13.060: INFO: stderr: ""
Jan  7 12:58:13.060: INFO: stdout: "true"
Jan  7 12:58:13.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4zw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:13.246: INFO: stderr: ""
Jan  7 12:58:13.246: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  7 12:58:13.246: INFO: validating pod update-demo-nautilus-g4zw4
Jan  7 12:58:13.271: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  7 12:58:13.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  7 12:58:13.271: INFO: update-demo-nautilus-g4zw4 is verified up and running
STEP: using delete to clean up resources
Jan  7 12:58:13.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:13.432: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  7 12:58:13.433: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  7 12:58:13.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8fmd7'
Jan  7 12:58:13.624: INFO: stderr: "No resources found.\n"
Jan  7 12:58:13.624: INFO: stdout: ""
Jan  7 12:58:13.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8fmd7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  7 12:58:13.841: INFO: stderr: ""
Jan  7 12:58:13.842: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:58:13.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8fmd7" for this suite.
Jan  7 12:58:38.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:58:38.108: INFO: namespace: e2e-tests-kubectl-8fmd7, resource: bindings, ignored listing per whitelist
Jan  7 12:58:38.297: INFO: namespace e2e-tests-kubectl-8fmd7 deletion completed in 24.435273068s

• [SLOW TEST:52.977 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:58:38.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 12:59:36.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-ksf98" for this suite.
Jan  7 12:59:42.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 12:59:43.008: INFO: namespace: e2e-tests-container-runtime-ksf98, resource: bindings, ignored listing per whitelist
Jan  7 12:59:43.086: INFO: namespace e2e-tests-container-runtime-ksf98 deletion completed in 6.171938132s

• [SLOW TEST:64.788 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 12:59:43.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 12:59:43.450: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  7 12:59:43.580: INFO: Number of nodes with available pods: 0
Jan  7 12:59:43.580: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:44.651: INFO: Number of nodes with available pods: 0
Jan  7 12:59:44.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:46.160: INFO: Number of nodes with available pods: 0
Jan  7 12:59:46.160: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:46.927: INFO: Number of nodes with available pods: 0
Jan  7 12:59:46.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:47.885: INFO: Number of nodes with available pods: 0
Jan  7 12:59:47.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:48.719: INFO: Number of nodes with available pods: 0
Jan  7 12:59:48.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:49.829: INFO: Number of nodes with available pods: 0
Jan  7 12:59:49.829: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:50.672: INFO: Number of nodes with available pods: 0
Jan  7 12:59:50.673: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:52.660: INFO: Number of nodes with available pods: 0
Jan  7 12:59:52.660: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:53.988: INFO: Number of nodes with available pods: 0
Jan  7 12:59:53.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:54.636: INFO: Number of nodes with available pods: 0
Jan  7 12:59:54.636: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:55.616: INFO: Number of nodes with available pods: 0
Jan  7 12:59:55.616: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 12:59:56.674: INFO: Number of nodes with available pods: 1
Jan  7 12:59:56.674: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  7 12:59:56.813: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 12:59:57.871: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 12:59:58.864: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 12:59:59.885: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 13:00:01.019: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 13:00:01.869: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 13:00:02.923: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 13:00:03.893: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 13:00:04.870: INFO: Wrong image for pod: daemon-set-7tmdk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  7 13:00:04.870: INFO: Pod daemon-set-7tmdk is not available
Jan  7 13:00:05.871: INFO: Pod daemon-set-8tq9f is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  7 13:00:05.948: INFO: Number of nodes with available pods: 0
Jan  7 13:00:05.948: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:06.976: INFO: Number of nodes with available pods: 0
Jan  7 13:00:06.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:07.984: INFO: Number of nodes with available pods: 0
Jan  7 13:00:07.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:08.984: INFO: Number of nodes with available pods: 0
Jan  7 13:00:08.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:10.011: INFO: Number of nodes with available pods: 0
Jan  7 13:00:10.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:10.999: INFO: Number of nodes with available pods: 0
Jan  7 13:00:10.999: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:11.980: INFO: Number of nodes with available pods: 0
Jan  7 13:00:11.980: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:13.080: INFO: Number of nodes with available pods: 0
Jan  7 13:00:13.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:13.981: INFO: Number of nodes with available pods: 0
Jan  7 13:00:13.981: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:15.113: INFO: Number of nodes with available pods: 0
Jan  7 13:00:15.114: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:16.012: INFO: Number of nodes with available pods: 0
Jan  7 13:00:16.012: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  7 13:00:16.986: INFO: Number of nodes with available pods: 1
Jan  7 13:00:16.986: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nl4tx, will wait for the garbage collector to delete the pods
Jan  7 13:00:17.070: INFO: Deleting DaemonSet.extensions daemon-set took: 10.487622ms
Jan  7 13:00:17.171: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.790816ms
Jan  7 13:00:25.482: INFO: Number of nodes with available pods: 0
Jan  7 13:00:25.482: INFO: Number of running nodes: 0, number of available pods: 0
Jan  7 13:00:25.485: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nl4tx/daemonsets","resourceVersion":"17481725"},"items":null}

Jan  7 13:00:25.488: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nl4tx/pods","resourceVersion":"17481725"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:00:25.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nl4tx" for this suite.
Jan  7 13:00:33.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:00:33.758: INFO: namespace: e2e-tests-daemonsets-nl4tx, resource: bindings, ignored listing per whitelist
Jan  7 13:00:33.998: INFO: namespace e2e-tests-daemonsets-nl4tx deletion completed in 8.431054578s

• [SLOW TEST:50.912 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:00:33.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  7 13:00:34.290: INFO: Waiting up to 5m0s for pod "pod-b1294c16-314d-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-wv282" to be "success or failure"
Jan  7 13:00:34.329: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.875637ms
Jan  7 13:00:36.344: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054080729s
Jan  7 13:00:38.358: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068143046s
Jan  7 13:00:40.712: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421548744s
Jan  7 13:00:42.743: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.452900391s
Jan  7 13:00:44.767: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.477243955s
STEP: Saw pod success
Jan  7 13:00:44.768: INFO: Pod "pod-b1294c16-314d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:00:44.776: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b1294c16-314d-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 13:00:45.202: INFO: Waiting for pod pod-b1294c16-314d-11ea-8b51-0242ac110005 to disappear
Jan  7 13:00:45.261: INFO: Pod pod-b1294c16-314d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:00:45.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wv282" for this suite.
Jan  7 13:00:51.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:00:51.538: INFO: namespace: e2e-tests-emptydir-wv282, resource: bindings, ignored listing per whitelist
Jan  7 13:00:51.621: INFO: namespace e2e-tests-emptydir-wv282 deletion completed in 6.260168932s

• [SLOW TEST:17.622 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:00:51.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-bbab1d07-314d-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 13:00:51.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-bmmd6" to be "success or failure"
Jan  7 13:00:52.078: INFO: Pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 147.012433ms
Jan  7 13:00:54.092: INFO: Pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160817466s
Jan  7 13:00:56.113: INFO: Pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181708204s
Jan  7 13:00:58.295: INFO: Pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36425943s
Jan  7 13:01:00.356: INFO: Pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.42440222s
STEP: Saw pod success
Jan  7 13:01:00.356: INFO: Pod "pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:01:00.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  7 13:01:00.907: INFO: Waiting for pod pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005 to disappear
Jan  7 13:01:00.937: INFO: Pod pod-configmaps-bbada9d1-314d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:01:00.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bmmd6" for this suite.
Jan  7 13:01:07.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:01:07.125: INFO: namespace: e2e-tests-configmap-bmmd6, resource: bindings, ignored listing per whitelist
Jan  7 13:01:07.328: INFO: namespace e2e-tests-configmap-bmmd6 deletion completed in 6.379067021s

• [SLOW TEST:15.707 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:01:07.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  7 13:01:26.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 13:01:26.215: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 13:01:28.216: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 13:01:28.243: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 13:01:30.216: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 13:01:30.343: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 13:01:32.216: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 13:01:32.237: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 13:01:34.216: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 13:01:34.233: INFO: Pod pod-with-poststart-http-hook still exists
Jan  7 13:01:36.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  7 13:01:36.240: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:01:36.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-njwg5" for this suite.
Jan  7 13:02:00.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:02:00.769: INFO: namespace: e2e-tests-container-lifecycle-hook-njwg5, resource: bindings, ignored listing per whitelist
Jan  7 13:02:00.787: INFO: namespace e2e-tests-container-lifecycle-hook-njwg5 deletion completed in 24.538766786s

• [SLOW TEST:53.458 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:02:00.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  7 13:02:01.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j96rh'
Jan  7 13:02:01.628: INFO: stderr: ""
Jan  7 13:02:01.628: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  7 13:02:16.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j96rh -o json'
Jan  7 13:02:16.847: INFO: stderr: ""
Jan  7 13:02:16.847: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-07T13:02:01Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-j96rh\",\n        \"resourceVersion\": \"17481970\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-j96rh/pods/e2e-test-nginx-pod\",\n        \"uid\": \"e533a17f-314d-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-rlc8k\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-rlc8k\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-rlc8k\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T13:02:01Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T13:02:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T13:02:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-07T13:02:01Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://b0f05c1268a7816c97f8a90534227fa43fb4262b58d23dc2fa6a8699e57d9f4e\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-07T13:02:11Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-07T13:02:01Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  7 13:02:16.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-j96rh'
Jan  7 13:02:17.296: INFO: stderr: ""
Jan  7 13:02:17.297: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  7 13:02:17.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-j96rh'
Jan  7 13:02:25.583: INFO: stderr: ""
Jan  7 13:02:25.584: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:02:25.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j96rh" for this suite.
Jan  7 13:02:33.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:02:33.719: INFO: namespace: e2e-tests-kubectl-j96rh, resource: bindings, ignored listing per whitelist
Jan  7 13:02:33.891: INFO: namespace e2e-tests-kubectl-j96rh deletion completed in 8.288513547s

• [SLOW TEST:33.104 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:02:33.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan  7 13:02:34.454: INFO: Waiting up to 5m0s for pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005" in namespace "e2e-tests-var-expansion-2c7t8" to be "success or failure"
Jan  7 13:02:34.623: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 167.932327ms
Jan  7 13:02:36.642: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186828936s
Jan  7 13:02:38.658: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203154186s
Jan  7 13:02:40.675: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220468691s
Jan  7 13:02:42.729: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273768247s
Jan  7 13:02:44.747: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.292150706s
Jan  7 13:02:46.757: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.302600356s
STEP: Saw pod success
Jan  7 13:02:46.758: INFO: Pod "var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:02:46.761: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  7 13:02:48.149: INFO: Waiting for pod var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005 to disappear
Jan  7 13:02:48.647: INFO: Pod var-expansion-f8c75ee8-314d-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:02:48.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2c7t8" for this suite.
Jan  7 13:02:54.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:02:54.998: INFO: namespace: e2e-tests-var-expansion-2c7t8, resource: bindings, ignored listing per whitelist
Jan  7 13:02:55.076: INFO: namespace e2e-tests-var-expansion-2c7t8 deletion completed in 6.407991096s

• [SLOW TEST:21.184 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:02:55.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0107 13:03:25.903576       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  7 13:03:25.903: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:03:25.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4qv2d" for this suite.
Jan  7 13:03:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:03:34.110: INFO: namespace: e2e-tests-gc-4qv2d, resource: bindings, ignored listing per whitelist
Jan  7 13:03:34.182: INFO: namespace e2e-tests-gc-4qv2d deletion completed in 8.272569635s

• [SLOW TEST:39.105 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
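The 30-second pause logged above ("wait for 30 seconds to see if the garbage collector mistakenly deletes the rs") is an "assert it stays" check: after an orphaning delete, the ReplicaSet must still exist for the whole window. A minimal, cluster-free sketch of that pattern in Python — the `exists` callback is a hypothetical stand-in for a ReplicaSet lookup, not the e2e framework's actual API:

```python
import time

def assert_survives(exists, duration=30.0, interval=2.0):
    """Poll `exists()` for `duration` seconds and fail immediately
    if the object ever disappears (i.e. was cascade-deleted)."""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        if not exists():
            raise AssertionError("object was deleted; expected it to be orphaned")
        time.sleep(interval)
    return True
```

In the real test the window is fixed at 30 seconds with the Kubernetes client doing the lookup; here the duration and interval are parameters so the pattern is testable in isolation.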
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:03:34.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  7 13:03:58.347: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:03:58.363: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:00.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:00.384: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:02.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:02.379: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:04.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:04.381: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:06.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:06.390: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:08.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:08.376: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:10.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:10.379: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:12.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:12.379: INFO: Pod pod-with-prestop-http-hook still exists
Jan  7 13:04:14.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  7 13:04:14.382: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:04:14.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jfx2m" for this suite.
Jan  7 13:04:38.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:04:38.778: INFO: namespace: e2e-tests-container-lifecycle-hook-jfx2m, resource: bindings, ignored listing per whitelist
Jan  7 13:04:38.787: INFO: namespace e2e-tests-container-lifecycle-hook-jfx2m deletion completed in 24.367916071s

• [SLOW TEST:64.605 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
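The alternating "Waiting for pod ... to disappear" / "still exists" lines above come from a fixed-interval poll (2 seconds in this run) against a deadline. A minimal sketch of such a loop in Python, with a hypothetical `exists` callback standing in for the pod GET:

```python
import time

def wait_for_disappear(exists, timeout=300.0, interval=2.0):
    """Return once `exists()` turns False; raise TimeoutError at the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not exists():
            return True
        time.sleep(interval)
    raise TimeoutError("object still exists after %.0fs" % timeout)
```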
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:04:38.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-43079603-314e-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 13:04:39.259: INFO: Waiting up to 5m0s for pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-8gzml" to be "success or failure"
Jan  7 13:04:39.357: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 97.787875ms
Jan  7 13:04:41.381: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121501463s
Jan  7 13:04:43.436: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176016487s
Jan  7 13:04:45.458: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198784982s
Jan  7 13:04:47.502: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242032452s
Jan  7 13:04:49.521: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.261647327s
STEP: Saw pod success
Jan  7 13:04:49.521: INFO: Pod "pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:04:49.527: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 13:04:49.727: INFO: Waiting for pod pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005 to disappear
Jan  7 13:04:49.746: INFO: Pod pod-secrets-4329e85c-314e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:04:49.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8gzml" for this suite.
Jan  7 13:04:57.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:04:57.982: INFO: namespace: e2e-tests-secrets-8gzml, resource: bindings, ignored listing per whitelist
Jan  7 13:04:58.012: INFO: namespace e2e-tests-secrets-8gzml deletion completed in 8.256782675s
STEP: Destroying namespace "e2e-tests-secret-namespace-46tsx" for this suite.
Jan  7 13:05:04.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:05:04.177: INFO: namespace: e2e-tests-secret-namespace-46tsx, resource: bindings, ignored listing per whitelist
Jan  7 13:05:04.278: INFO: namespace e2e-tests-secret-namespace-46tsx deletion completed in 6.265423759s

• [SLOW TEST:25.490 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
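The `"success or failure"` condition seen in these volume tests polls the pod's `Phase` until it reaches a terminal value (`Succeeded` or `Failed`); the Pending lines above are the intermediate samples. A cluster-free sketch of that wait in Python — `get_phase` is a hypothetical callback in place of a pod status read:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """Poll `get_phase()` until the pod reaches a terminal phase,
    then return that phase; raise TimeoutError at the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```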
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:05:04.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-klc8l/configmap-test-525d5f63-314e-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  7 13:05:04.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005" in namespace "e2e-tests-configmap-klc8l" to be "success or failure"
Jan  7 13:05:05.007: INFO: Pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 229.838412ms
Jan  7 13:05:07.019: INFO: Pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24259702s
Jan  7 13:05:09.453: INFO: Pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.676190679s
Jan  7 13:05:11.463: INFO: Pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686496306s
Jan  7 13:05:13.875: INFO: Pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.09818823s
STEP: Saw pod success
Jan  7 13:05:13.875: INFO: Pod "pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:05:13.883: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005 container env-test: 
STEP: delete the pod
Jan  7 13:05:14.371: INFO: Waiting for pod pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005 to disappear
Jan  7 13:05:14.423: INFO: Pod pod-configmaps-526019b1-314e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:05:14.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-klc8l" for this suite.
Jan  7 13:05:20.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:05:20.717: INFO: namespace: e2e-tests-configmap-klc8l, resource: bindings, ignored listing per whitelist
Jan  7 13:05:20.862: INFO: namespace e2e-tests-configmap-klc8l deletion completed in 6.422278592s

• [SLOW TEST:16.584 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:05:20.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 13:05:21.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:05:29.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-w6s5h" for this suite.
Jan  7 13:06:13.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:06:13.537: INFO: namespace: e2e-tests-pods-w6s5h, resource: bindings, ignored listing per whitelist
Jan  7 13:06:13.634: INFO: namespace e2e-tests-pods-w6s5h deletion completed in 44.243622365s

• [SLOW TEST:52.771 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:06:13.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  7 13:06:14.053: INFO: Waiting up to 5m0s for pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-7lqnh" to be "success or failure"
Jan  7 13:06:14.198: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.445761ms
Jan  7 13:06:16.251: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197119106s
Jan  7 13:06:18.271: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217213503s
Jan  7 13:06:20.331: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277529381s
Jan  7 13:06:22.588: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534783398s
Jan  7 13:06:24.630: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.576317122s
STEP: Saw pod success
Jan  7 13:06:24.630: INFO: Pod "pod-7ba51e7f-314e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:06:24.682: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7ba51e7f-314e-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 13:06:24.945: INFO: Waiting for pod pod-7ba51e7f-314e-11ea-8b51-0242ac110005 to disappear
Jan  7 13:06:25.052: INFO: Pod pod-7ba51e7f-314e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:06:25.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7lqnh" for this suite.
Jan  7 13:06:31.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:06:31.373: INFO: namespace: e2e-tests-emptydir-7lqnh, resource: bindings, ignored listing per whitelist
Jan  7 13:06:31.616: INFO: namespace e2e-tests-emptydir-7lqnh deletion completed in 6.53485822s

• [SLOW TEST:17.981 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:06:31.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  7 13:06:40.342: INFO: Pod pod-hostip-864ceaa1-314e-11ea-8b51-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:06:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-x7vj9" for this suite.
Jan  7 13:07:04.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:07:04.593: INFO: namespace: e2e-tests-pods-x7vj9, resource: bindings, ignored listing per whitelist
Jan  7 13:07:04.689: INFO: namespace e2e-tests-pods-x7vj9 deletion completed in 24.332133444s

• [SLOW TEST:33.073 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:07:04.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  7 13:07:05.262: INFO: Waiting up to 5m0s for pod "pod-9a31146a-314e-11ea-8b51-0242ac110005" in namespace "e2e-tests-emptydir-2lgl9" to be "success or failure"
Jan  7 13:07:05.277: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.59266ms
Jan  7 13:07:07.517: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254888524s
Jan  7 13:07:09.551: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288776043s
Jan  7 13:07:11.564: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301973839s
Jan  7 13:07:14.076: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.814029364s
Jan  7 13:07:16.102: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.839287132s
Jan  7 13:07:18.121: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.858154663s
Jan  7 13:07:20.140: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.877549089s
STEP: Saw pod success
Jan  7 13:07:20.140: INFO: Pod "pod-9a31146a-314e-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:07:20.146: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9a31146a-314e-11ea-8b51-0242ac110005 container test-container: 
STEP: delete the pod
Jan  7 13:07:21.753: INFO: Waiting for pod pod-9a31146a-314e-11ea-8b51-0242ac110005 to disappear
Jan  7 13:07:22.179: INFO: Pod pod-9a31146a-314e-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:07:22.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2lgl9" for this suite.
Jan  7 13:07:28.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:07:28.780: INFO: namespace: e2e-tests-emptydir-2lgl9, resource: bindings, ignored listing per whitelist
Jan  7 13:07:28.940: INFO: namespace e2e-tests-emptydir-2lgl9 deletion completed in 6.733615803s

• [SLOW TEST:24.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:07:28.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:07:41.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-tddhm" for this suite.
Jan  7 13:07:48.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:07:48.173: INFO: namespace: e2e-tests-emptydir-wrapper-tddhm, resource: bindings, ignored listing per whitelist
Jan  7 13:07:48.280: INFO: namespace e2e-tests-emptydir-wrapper-tddhm deletion completed in 6.418760305s

• [SLOW TEST:19.340 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:07:48.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  7 13:07:48.747: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix417639992/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:07:48.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b9z4s" for this suite.
Jan  7 13:07:54.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:07:55.047: INFO: namespace: e2e-tests-kubectl-b9z4s, resource: bindings, ignored listing per whitelist
Jan  7 13:07:55.147: INFO: namespace e2e-tests-kubectl-b9z4s deletion completed in 6.215740299s

• [SLOW TEST:6.868 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
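The proxy test above starts `kubectl proxy` on a Unix domain socket and then retrieves `/api/` through it. A hedged manual reproduction might look like the following; it assumes a reachable cluster, `kubectl`, and curl >= 7.40 (which added `--unix-socket`), and the socket path is illustrative:

```shell
# Start the proxy on a Unix domain socket (illustrative path).
kubectl --kubeconfig="$HOME/.kube/config" proxy --unix-socket=/tmp/kubectl-proxy.sock &

# Query the API server through the socket.
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

# Clean up the background proxy.
kill %1
```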
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:07:55.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-j8465
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  7 13:07:55.543: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  7 13:08:35.930: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-j8465 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  7 13:08:35.930: INFO: >>> kubeConfig: /root/.kube/config
I0107 13:08:36.003653       8 log.go:172] (0xc001028790) (0xc001e7b040) Create stream
I0107 13:08:36.003837       8 log.go:172] (0xc001028790) (0xc001e7b040) Stream added, broadcasting: 1
I0107 13:08:36.009602       8 log.go:172] (0xc001028790) Reply frame received for 1
I0107 13:08:36.009749       8 log.go:172] (0xc001028790) (0xc00236f5e0) Create stream
I0107 13:08:36.009766       8 log.go:172] (0xc001028790) (0xc00236f5e0) Stream added, broadcasting: 3
I0107 13:08:36.011233       8 log.go:172] (0xc001028790) Reply frame received for 3
I0107 13:08:36.011267       8 log.go:172] (0xc001028790) (0xc00236f680) Create stream
I0107 13:08:36.011282       8 log.go:172] (0xc001028790) (0xc00236f680) Stream added, broadcasting: 5
I0107 13:08:36.012305       8 log.go:172] (0xc001028790) Reply frame received for 5
I0107 13:08:36.312815       8 log.go:172] (0xc001028790) Data frame received for 3
I0107 13:08:36.313057       8 log.go:172] (0xc00236f5e0) (3) Data frame handling
I0107 13:08:36.313111       8 log.go:172] (0xc00236f5e0) (3) Data frame sent
I0107 13:08:36.617226       8 log.go:172] (0xc001028790) Data frame received for 1
I0107 13:08:36.617393       8 log.go:172] (0xc001028790) (0xc00236f5e0) Stream removed, broadcasting: 3
I0107 13:08:36.617492       8 log.go:172] (0xc001e7b040) (1) Data frame handling
I0107 13:08:36.617535       8 log.go:172] (0xc001028790) (0xc00236f680) Stream removed, broadcasting: 5
I0107 13:08:36.617553       8 log.go:172] (0xc001e7b040) (1) Data frame sent
I0107 13:08:36.617558       8 log.go:172] (0xc001028790) (0xc001e7b040) Stream removed, broadcasting: 1
I0107 13:08:36.617592       8 log.go:172] (0xc001028790) Go away received
I0107 13:08:36.617964       8 log.go:172] (0xc001028790) (0xc001e7b040) Stream removed, broadcasting: 1
I0107 13:08:36.617993       8 log.go:172] (0xc001028790) (0xc00236f5e0) Stream removed, broadcasting: 3
I0107 13:08:36.618016       8 log.go:172] (0xc001028790) (0xc00236f680) Stream removed, broadcasting: 5
Jan  7 13:08:36.618: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:08:36.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-j8465" for this suite.
Jan  7 13:09:02.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:09:02.980: INFO: namespace: e2e-tests-pod-network-test-j8465, resource: bindings, ignored listing per whitelist
Jan  7 13:09:03.011: INFO: namespace e2e-tests-pod-network-test-j8465 deletion completed in 26.344376516s

• [SLOW TEST:67.862 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:09:03.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  7 13:09:03.218: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-k2mkk" to be "success or failure"
Jan  7 13:09:03.232: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.19303ms
Jan  7 13:09:05.246: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026956787s
Jan  7 13:09:07.260: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041767107s
Jan  7 13:09:09.305: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086823283s
Jan  7 13:09:11.336: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117305445s
Jan  7 13:09:13.552: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333359682s
Jan  7 13:09:16.119: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.900592507s
Jan  7 13:09:18.137: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.918122365s
Jan  7 13:09:20.147: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.928151559s
STEP: Saw pod success
Jan  7 13:09:20.147: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  7 13:09:20.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  7 13:09:21.310: INFO: Waiting for pod pod-host-path-test to disappear
Jan  7 13:09:21.873: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:09:21.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-k2mkk" for this suite.
Jan  7 13:09:28.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:09:28.269: INFO: namespace: e2e-tests-hostpath-k2mkk, resource: bindings, ignored listing per whitelist
Jan  7 13:09:28.346: INFO: namespace e2e-tests-hostpath-k2mkk deletion completed in 6.446821303s

• [SLOW TEST:25.334 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:09:28.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan  7 13:09:28.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  7 13:09:28.946: INFO: stderr: ""
Jan  7 13:09:28.946: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:09:28.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h2w8q" for this suite.
Jan  7 13:09:34.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:09:35.034: INFO: namespace: e2e-tests-kubectl-h2w8q, resource: bindings, ignored listing per whitelist
Jan  7 13:09:35.134: INFO: namespace e2e-tests-kubectl-h2w8q deletion completed in 6.1795872s

• [SLOW TEST:6.787 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:09:35.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  7 13:09:46.183: INFO: Successfully updated pod "labelsupdatef3c2e9d4-314e-11ea-8b51-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:09:48.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-plwkt" for this suite.
Jan  7 13:10:12.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:10:12.668: INFO: namespace: e2e-tests-downward-api-plwkt, resource: bindings, ignored listing per whitelist
Jan  7 13:10:12.803: INFO: namespace e2e-tests-downward-api-plwkt deletion completed in 24.368576333s

• [SLOW TEST:37.668 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:10:12.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-d4qf8
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  7 13:10:13.150: INFO: Found 0 stateful pods, waiting for 3
Jan  7 13:10:23.216: INFO: Found 1 stateful pods, waiting for 3
Jan  7 13:10:33.915: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 13:10:33.915: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 13:10:33.915: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  7 13:10:43.167: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 13:10:43.167: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 13:10:43.167: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  7 13:10:43.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d4qf8 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 13:10:44.122: INFO: stderr: "I0107 13:10:43.420960    3755 log.go:172] (0xc0006d0370) (0xc000669400) Create stream\nI0107 13:10:43.421450    3755 log.go:172] (0xc0006d0370) (0xc000669400) Stream added, broadcasting: 1\nI0107 13:10:43.428851    3755 log.go:172] (0xc0006d0370) Reply frame received for 1\nI0107 13:10:43.428960    3755 log.go:172] (0xc0006d0370) (0xc000724000) Create stream\nI0107 13:10:43.428992    3755 log.go:172] (0xc0006d0370) (0xc000724000) Stream added, broadcasting: 3\nI0107 13:10:43.430199    3755 log.go:172] (0xc0006d0370) Reply frame received for 3\nI0107 13:10:43.430231    3755 log.go:172] (0xc0006d0370) (0xc0007240a0) Create stream\nI0107 13:10:43.430241    3755 log.go:172] (0xc0006d0370) (0xc0007240a0) Stream added, broadcasting: 5\nI0107 13:10:43.431315    3755 log.go:172] (0xc0006d0370) Reply frame received for 5\nI0107 13:10:43.728813    3755 log.go:172] (0xc0006d0370) Data frame received for 3\nI0107 13:10:43.729090    3755 log.go:172] (0xc000724000) (3) Data frame handling\nI0107 13:10:43.729147    3755 log.go:172] (0xc000724000) (3) Data frame sent\nI0107 13:10:44.101688    3755 log.go:172] (0xc0006d0370) Data frame received for 1\nI0107 13:10:44.101875    3755 log.go:172] (0xc0006d0370) (0xc0007240a0) Stream removed, broadcasting: 5\nI0107 13:10:44.102063    3755 log.go:172] (0xc000669400) (1) Data frame handling\nI0107 13:10:44.102110    3755 log.go:172] (0xc000669400) (1) Data frame sent\nI0107 13:10:44.102167    3755 log.go:172] (0xc0006d0370) (0xc000724000) Stream removed, broadcasting: 3\nI0107 13:10:44.102253    3755 log.go:172] (0xc0006d0370) (0xc000669400) Stream removed, broadcasting: 1\nI0107 13:10:44.102295    3755 log.go:172] (0xc0006d0370) Go away received\nI0107 13:10:44.103405    3755 log.go:172] (0xc0006d0370) (0xc000669400) Stream removed, broadcasting: 1\nI0107 13:10:44.103431    3755 log.go:172] (0xc0006d0370) (0xc000724000) Stream removed, broadcasting: 3\nI0107 13:10:44.103450    3755 log.go:172] (0xc0006d0370) (0xc0007240a0) Stream removed, broadcasting: 5\n"
Jan  7 13:10:44.122: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 13:10:44.123: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  7 13:10:44.230: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  7 13:10:54.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d4qf8 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 13:10:55.016: INFO: stderr: "I0107 13:10:54.433778    3777 log.go:172] (0xc000720370) (0xc0007ac640) Create stream\nI0107 13:10:54.434119    3777 log.go:172] (0xc000720370) (0xc0007ac640) Stream added, broadcasting: 1\nI0107 13:10:54.440557    3777 log.go:172] (0xc000720370) Reply frame received for 1\nI0107 13:10:54.440609    3777 log.go:172] (0xc000720370) (0xc0005c2d20) Create stream\nI0107 13:10:54.440622    3777 log.go:172] (0xc000720370) (0xc0005c2d20) Stream added, broadcasting: 3\nI0107 13:10:54.441913    3777 log.go:172] (0xc000720370) Reply frame received for 3\nI0107 13:10:54.441950    3777 log.go:172] (0xc000720370) (0xc0005c2e60) Create stream\nI0107 13:10:54.441968    3777 log.go:172] (0xc000720370) (0xc0005c2e60) Stream added, broadcasting: 5\nI0107 13:10:54.442657    3777 log.go:172] (0xc000720370) Reply frame received for 5\nI0107 13:10:54.661833    3777 log.go:172] (0xc000720370) Data frame received for 3\nI0107 13:10:54.662013    3777 log.go:172] (0xc0005c2d20) (3) Data frame handling\nI0107 13:10:54.662050    3777 log.go:172] (0xc0005c2d20) (3) Data frame sent\nI0107 13:10:55.000857    3777 log.go:172] (0xc000720370) Data frame received for 1\nI0107 13:10:55.001005    3777 log.go:172] (0xc0007ac640) (1) Data frame handling\nI0107 13:10:55.001053    3777 log.go:172] (0xc0007ac640) (1) Data frame sent\nI0107 13:10:55.001083    3777 log.go:172] (0xc000720370) (0xc0007ac640) Stream removed, broadcasting: 1\nI0107 13:10:55.002142    3777 log.go:172] (0xc000720370) (0xc0005c2e60) Stream removed, broadcasting: 5\nI0107 13:10:55.002273    3777 log.go:172] (0xc000720370) (0xc0005c2d20) Stream removed, broadcasting: 3\nI0107 13:10:55.002358    3777 log.go:172] (0xc000720370) (0xc0007ac640) Stream removed, broadcasting: 1\nI0107 13:10:55.002380    3777 log.go:172] (0xc000720370) (0xc0005c2d20) Stream removed, broadcasting: 3\nI0107 13:10:55.002395    3777 log.go:172] (0xc000720370) (0xc0005c2e60) Stream removed, broadcasting: 5\n"
Jan  7 13:10:55.016: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 13:10:55.016: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 13:10:55.331: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:10:55.331: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:10:55.331: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:10:55.331: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:05.750: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:11:05.750: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:05.750: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:15.397: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:11:15.397: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:15.397: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:25.408: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:11:25.408: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:35.892: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:11:35.892: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  7 13:11:45.353: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  7 13:11:55.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d4qf8 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  7 13:11:56.234: INFO: stderr: "I0107 13:11:55.776366    3800 log.go:172] (0xc0004e2420) (0xc0005c1400) Create stream\nI0107 13:11:55.776652    3800 log.go:172] (0xc0004e2420) (0xc0005c1400) Stream added, broadcasting: 1\nI0107 13:11:55.785261    3800 log.go:172] (0xc0004e2420) Reply frame received for 1\nI0107 13:11:55.785473    3800 log.go:172] (0xc0004e2420) (0xc00030c000) Create stream\nI0107 13:11:55.785497    3800 log.go:172] (0xc0004e2420) (0xc00030c000) Stream added, broadcasting: 3\nI0107 13:11:55.787975    3800 log.go:172] (0xc0004e2420) Reply frame received for 3\nI0107 13:11:55.788029    3800 log.go:172] (0xc0004e2420) (0xc000318000) Create stream\nI0107 13:11:55.788046    3800 log.go:172] (0xc0004e2420) (0xc000318000) Stream added, broadcasting: 5\nI0107 13:11:55.789554    3800 log.go:172] (0xc0004e2420) Reply frame received for 5\nI0107 13:11:56.088860    3800 log.go:172] (0xc0004e2420) Data frame received for 3\nI0107 13:11:56.088989    3800 log.go:172] (0xc00030c000) (3) Data frame handling\nI0107 13:11:56.089013    3800 log.go:172] (0xc00030c000) (3) Data frame sent\nI0107 13:11:56.215398    3800 log.go:172] (0xc0004e2420) Data frame received for 1\nI0107 13:11:56.216073    3800 log.go:172] (0xc0004e2420) (0xc00030c000) Stream removed, broadcasting: 3\nI0107 13:11:56.216286    3800 log.go:172] (0xc0005c1400) (1) Data frame handling\nI0107 13:11:56.216536    3800 log.go:172] (0xc0005c1400) (1) Data frame sent\nI0107 13:11:56.216855    3800 log.go:172] (0xc0004e2420) (0xc000318000) Stream removed, broadcasting: 5\nI0107 13:11:56.217117    3800 log.go:172] (0xc0004e2420) (0xc0005c1400) Stream removed, broadcasting: 1\nI0107 13:11:56.217233    3800 log.go:172] (0xc0004e2420) Go away received\nI0107 13:11:56.219235    3800 log.go:172] (0xc0004e2420) (0xc0005c1400) Stream removed, broadcasting: 1\nI0107 13:11:56.219407    3800 log.go:172] (0xc0004e2420) (0xc00030c000) Stream removed, broadcasting: 3\nI0107 13:11:56.219422    3800 log.go:172] (0xc0004e2420) (0xc000318000) Stream removed, broadcasting: 5\n"
Jan  7 13:11:56.235: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  7 13:11:56.235: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  7 13:12:06.320: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  7 13:12:16.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-d4qf8 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  7 13:12:17.132: INFO: stderr: "I0107 13:12:16.701404    3822 log.go:172] (0xc0008202c0) (0xc0005ab2c0) Create stream\nI0107 13:12:16.701751    3822 log.go:172] (0xc0008202c0) (0xc0005ab2c0) Stream added, broadcasting: 1\nI0107 13:12:16.708015    3822 log.go:172] (0xc0008202c0) Reply frame received for 1\nI0107 13:12:16.708063    3822 log.go:172] (0xc0008202c0) (0xc000504000) Create stream\nI0107 13:12:16.708076    3822 log.go:172] (0xc0008202c0) (0xc000504000) Stream added, broadcasting: 3\nI0107 13:12:16.709286    3822 log.go:172] (0xc0008202c0) Reply frame received for 3\nI0107 13:12:16.709314    3822 log.go:172] (0xc0008202c0) (0xc0005ab360) Create stream\nI0107 13:12:16.709323    3822 log.go:172] (0xc0008202c0) (0xc0005ab360) Stream added, broadcasting: 5\nI0107 13:12:16.710455    3822 log.go:172] (0xc0008202c0) Reply frame received for 5\nI0107 13:12:16.808531    3822 log.go:172] (0xc0008202c0) Data frame received for 3\nI0107 13:12:16.808713    3822 log.go:172] (0xc000504000) (3) Data frame handling\nI0107 13:12:16.808787    3822 log.go:172] (0xc000504000) (3) Data frame sent\nI0107 13:12:17.106886    3822 log.go:172] (0xc0008202c0) Data frame received for 1\nI0107 13:12:17.107694    3822 log.go:172] (0xc0005ab2c0) (1) Data frame handling\nI0107 13:12:17.107783    3822 log.go:172] (0xc0005ab2c0) (1) Data frame sent\nI0107 13:12:17.112202    3822 log.go:172] (0xc0008202c0) (0xc0005ab360) Stream removed, broadcasting: 5\nI0107 13:12:17.112354    3822 log.go:172] (0xc0008202c0) (0xc0005ab2c0) Stream removed, broadcasting: 1\nI0107 13:12:17.112470    3822 log.go:172] (0xc0008202c0) (0xc000504000) Stream removed, broadcasting: 3\nI0107 13:12:17.112907    3822 log.go:172] (0xc0008202c0) Go away received\nI0107 13:12:17.113469    3822 log.go:172] (0xc0008202c0) (0xc0005ab2c0) Stream removed, broadcasting: 1\nI0107 13:12:17.113522    3822 log.go:172] (0xc0008202c0) (0xc000504000) Stream removed, broadcasting: 3\nI0107 13:12:17.113541    3822 log.go:172] (0xc0008202c0) (0xc0005ab360) Stream removed, broadcasting: 5\n"
Jan  7 13:12:17.132: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  7 13:12:17.132: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  7 13:12:17.368: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:12:17.368: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:17.368: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:17.368: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:27.593: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:12:27.594: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:27.594: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:37.383: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:12:37.383: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:37.383: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:47.429: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:12:47.429: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:12:57.390: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
Jan  7 13:12:57.391: INFO: Waiting for Pod e2e-tests-statefulset-d4qf8/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  7 13:13:07.667: INFO: Waiting for StatefulSet e2e-tests-statefulset-d4qf8/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  7 13:13:17.425: INFO: Deleting all statefulset in ns e2e-tests-statefulset-d4qf8
Jan  7 13:13:17.436: INFO: Scaling statefulset ss2 to 0
Jan  7 13:13:57.545: INFO: Waiting for statefulset status.replicas updated to 0
Jan  7 13:13:57.557: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:13:57.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-d4qf8" for this suite.
Jan  7 13:14:05.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:14:05.863: INFO: namespace: e2e-tests-statefulset-d4qf8, resource: bindings, ignored listing per whitelist
Jan  7 13:14:05.883: INFO: namespace e2e-tests-statefulset-d4qf8 deletion completed in 8.271829849s

• [SLOW TEST:233.080 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:14:05.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-954101bc-314f-11ea-8b51-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-954101bc-314f-11ea-8b51-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:15:47.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t69ct" for this suite.
Jan  7 13:16:11.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:16:11.981: INFO: namespace: e2e-tests-projected-t69ct, resource: bindings, ignored listing per whitelist
Jan  7 13:16:12.025: INFO: namespace e2e-tests-projected-t69ct deletion completed in 24.45613419s

• [SLOW TEST:126.141 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:16:12.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:16:12.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-6wtbj" for this suite.
Jan  7 13:16:18.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:16:18.453: INFO: namespace: e2e-tests-services-6wtbj, resource: bindings, ignored listing per whitelist
Jan  7 13:16:18.667: INFO: namespace e2e-tests-services-6wtbj deletion completed in 6.401005068s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.641 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:16:18.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  7 13:16:18.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  7 13:16:19.062: INFO: stderr: ""
Jan  7 13:16:19.063: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:16:19.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zsvdx" for this suite.
Jan  7 13:16:25.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:16:25.222: INFO: namespace: e2e-tests-kubectl-zsvdx, resource: bindings, ignored listing per whitelist
Jan  7 13:16:25.270: INFO: namespace e2e-tests-kubectl-zsvdx deletion completed in 6.183681357s

• [SLOW TEST:6.603 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:16:25.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e82af9ba-314f-11ea-8b51-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  7 13:16:25.616: INFO: Waiting up to 5m0s for pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005" in namespace "e2e-tests-secrets-c7v8x" to be "success or failure"
Jan  7 13:16:25.629: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.714431ms
Jan  7 13:16:27.665: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049156583s
Jan  7 13:16:29.680: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064355422s
Jan  7 13:16:31.840: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224392338s
Jan  7 13:16:33.858: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241846581s
Jan  7 13:16:35.887: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.271005782s
STEP: Saw pod success
Jan  7 13:16:35.887: INFO: Pod "pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:16:35.914: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  7 13:16:36.746: INFO: Waiting for pod pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005 to disappear
Jan  7 13:16:36.766: INFO: Pod pod-secrets-e8301bfb-314f-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:16:36.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-c7v8x" for this suite.
Jan  7 13:16:42.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:16:43.029: INFO: namespace: e2e-tests-secrets-c7v8x, resource: bindings, ignored listing per whitelist
Jan  7 13:16:43.147: INFO: namespace e2e-tests-secrets-c7v8x deletion completed in 6.371004625s

• [SLOW TEST:17.877 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  7 13:16:43.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  7 13:16:43.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005" in namespace "e2e-tests-projected-lk684" to be "success or failure"
Jan  7 13:16:43.328: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.177492ms
Jan  7 13:16:45.813: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497468314s
Jan  7 13:16:47.826: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.509823861s
Jan  7 13:16:49.835: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.519632838s
Jan  7 13:16:51.992: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.676240921s
Jan  7 13:16:54.019: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.703307235s
Jan  7 13:16:56.595: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.279526205s
Jan  7 13:16:58.659: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.343441118s
STEP: Saw pod success
Jan  7 13:16:58.660: INFO: Pod "downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005" satisfied condition "success or failure"
Jan  7 13:16:58.706: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005 container client-container: 
STEP: delete the pod
Jan  7 13:17:00.969: INFO: Waiting for pod downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005 to disappear
Jan  7 13:17:01.004: INFO: Pod downwardapi-volume-f2bfddb1-314f-11ea-8b51-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  7 13:17:01.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lk684" for this suite.
Jan  7 13:17:09.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  7 13:17:09.229: INFO: namespace: e2e-tests-projected-lk684, resource: bindings, ignored listing per whitelist
Jan  7 13:17:09.385: INFO: namespace e2e-tests-projected-lk684 deletion completed in 8.368562416s

• [SLOW TEST:26.237 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
Jan  7 13:17:09.386: INFO: Running AfterSuite actions on all nodes
Jan  7 13:17:09.386: INFO: Running AfterSuite actions on node 1
Jan  7 13:17:09.386: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8992.972 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8993.63s)
FAIL