I0226 10:47:15.798082 9 e2e.go:224] Starting e2e run "59daa7f6-5885-11ea-8134-0242ac110008" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582714034 - Will randomize all specs
Will run 201 of 2164 specs
Feb 26 10:47:16.313: INFO: >>> kubeConfig: /root/.kube/config
Feb 26 10:47:16.320: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 26 10:47:16.346: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 26 10:47:16.377: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 26 10:47:16.377: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 26 10:47:16.377: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 26 10:47:16.385: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 26 10:47:16.385: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 26 10:47:16.385: INFO: e2e test version: v1.13.12
Feb 26 10:47:16.387: INFO: kube-apiserver version: v1.13.8
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:47:16.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Feb 26 10:47:16.581: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 10:47:16.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-vbnm4" to be "success or failure"
Feb 26 10:47:16.698: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 102.831523ms
Feb 26 10:47:18.717: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122267939s
Feb 26 10:47:20.951: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356126998s
Feb 26 10:47:22.975: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379735866s
Feb 26 10:47:25.560: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.964839014s
Feb 26 10:47:27.582: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.987029901s
Feb 26 10:47:29.602: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.007306134s
STEP: Saw pod success
Feb 26 10:47:29.602: INFO: Pod "downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:47:29.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008 container client-container:
STEP: delete the pod
Feb 26 10:47:29.713: INFO: Waiting for pod downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:47:29.723: INFO: Pod downwardapi-volume-5ad22743-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:47:29.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vbnm4" for this suite.
Feb 26 10:47:35.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:47:35.966: INFO: namespace: e2e-tests-downward-api-vbnm4, resource: bindings, ignored listing per whitelist
Feb 26 10:47:35.987: INFO: namespace e2e-tests-downward-api-vbnm4 deletion completed in 6.254558793s
• [SLOW TEST:19.601 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:47:35.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 10:47:46.345: INFO: Waiting up to 5m0s for pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-pods-qzggq" to be "success or failure"
Feb 26 10:47:46.400: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 54.857866ms
Feb 26 10:47:48.417: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071514952s
Feb 26 10:47:50.470: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12419199s
Feb 26 10:47:52.984: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.638513222s
Feb 26 10:47:55.007: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.661242189s
Feb 26 10:47:57.029: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.683284499s
STEP: Saw pod success
Feb 26 10:47:57.029: INFO: Pod "client-envvars-6c876e5d-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:47:57.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-6c876e5d-5885-11ea-8134-0242ac110008 container env3cont:
STEP: delete the pod
Feb 26 10:47:57.103: INFO: Waiting for pod client-envvars-6c876e5d-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:47:57.114: INFO: Pod client-envvars-6c876e5d-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:47:57.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-qzggq" for this suite.
Feb 26 10:48:41.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:48:41.477: INFO: namespace: e2e-tests-pods-qzggq, resource: bindings, ignored listing per whitelist
Feb 26 10:48:41.537: INFO: namespace e2e-tests-pods-qzggq deletion completed in 44.415643545s
• [SLOW TEST:65.549 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:48:41.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 26 10:48:41.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4xbb5'
Feb 26 10:48:43.933: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 10:48:43.933: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 26 10:48:46.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4xbb5'
Feb 26 10:48:46.770: INFO: stderr: ""
Feb 26 10:48:46.770: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:48:46.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4xbb5" for this suite.
Feb 26 10:48:52.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:48:53.097: INFO: namespace: e2e-tests-kubectl-4xbb5, resource: bindings, ignored listing per whitelist
Feb 26 10:48:53.163: INFO: namespace e2e-tests-kubectl-4xbb5 deletion completed in 6.364600969s
• [SLOW TEST:11.625 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:48:53.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 26 10:49:04.988: INFO: Successfully updated pod "annotationupdate94796ccf-5885-11ea-8134-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:49:07.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-975l8" for this suite.
Feb 26 10:49:29.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:49:29.521: INFO: namespace: e2e-tests-downward-api-975l8, resource: bindings, ignored listing per whitelist
Feb 26 10:49:29.521: INFO: namespace e2e-tests-downward-api-975l8 deletion completed in 22.271567408s
• [SLOW TEST:36.358 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:49:29.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 10:49:29.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:49:40.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-66jmb" for this suite.
Feb 26 10:50:22.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:50:22.521: INFO: namespace: e2e-tests-pods-66jmb, resource: bindings, ignored listing per whitelist
Feb 26 10:50:22.548: INFO: namespace e2e-tests-pods-66jmb deletion completed in 42.210201724s
• [SLOW TEST:53.025 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:50:22.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 26 10:50:22.750: INFO: Waiting up to 5m0s for pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-var-expansion-jnj58" to be "success or failure"
Feb 26 10:50:22.767: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.896309ms
Feb 26 10:50:25.264: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.513475398s
Feb 26 10:50:27.294: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.543588085s
Feb 26 10:50:29.307: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556547096s
Feb 26 10:50:31.322: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.571374479s
Feb 26 10:50:33.340: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.589539933s
STEP: Saw pod success
Feb 26 10:50:33.340: INFO: Pod "var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:50:33.346: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008 container dapi-container:
STEP: delete the pod
Feb 26 10:50:33.497: INFO: Waiting for pod var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:50:33.510: INFO: Pod var-expansion-c9c69ba9-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:50:33.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-jnj58" for this suite.
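The variable-expansion spec above only records pod phase transitions, so here is a minimal Go sketch, written outside the e2e framework, of the kind of pod it exercises: a container command that references an environment variable with the $(VAR) syntax the kubelet expands. Pod, container, and variable names are illustrative, not taken from the log.

// Minimal sketch (assumed names, not the e2e framework's own helper): a pod whose
// command expands an env var via the $(VAR) syntax checked by the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				// $(MESSAGE) is expanded by the kubelet before the shell runs.
				Command: []string{"sh", "-c", "echo $(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from substitution"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}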
Feb 26 10:50:39.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:50:39.741: INFO: namespace: e2e-tests-var-expansion-jnj58, resource: bindings, ignored listing per whitelist
Feb 26 10:50:39.835: INFO: namespace e2e-tests-var-expansion-jnj58 deletion completed in 6.310092043s
• [SLOW TEST:17.287 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:50:39.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d4149d16-5885-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 10:50:40.047: INFO: Waiting up to 5m0s for pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-m674r" to be "success or failure"
Feb 26 10:50:40.058: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.300571ms
Feb 26 10:50:42.076: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028821418s
Feb 26 10:50:44.450: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403007063s
Feb 26 10:50:46.506: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459200776s
Feb 26 10:50:48.543: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495999186s
Feb 26 10:50:50.569: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.521979882s
STEP: Saw pod success
Feb 26 10:50:50.569: INFO: Pod "pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:50:50.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008 container secret-volume-test:
STEP: delete the pod
Feb 26 10:50:50.680: INFO: Waiting for pod pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:50:50.694: INFO: Pod pod-secrets-d415e6b8-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:50:50.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-m674r" for this suite.
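As a companion to the Secrets spec above, here is a minimal sketch, with assumed names, of a pod that mounts a secret volume with an explicit defaultMode while running as a non-root user with an fsGroup set, which is the combination that test verifies.

// Minimal sketch (assumed names, not from the log): secret volume with defaultMode,
// consumed by a pod that runs as a non-root UID with an fsGroup.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)    // file mode applied to the projected secret keys
	uid := int64(1000)     // non-root user
	fsGroup := int64(2000) // group ownership applied to the volume

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo", // illustrative secret name
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}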
Feb 26 10:50:56.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:50:56.831: INFO: namespace: e2e-tests-secrets-m674r, resource: bindings, ignored listing per whitelist
Feb 26 10:50:56.868: INFO: namespace e2e-tests-secrets-m674r deletion completed in 6.169028771s
• [SLOW TEST:17.033 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:50:56.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-de3222a9-5885-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 10:50:57.071: INFO: Waiting up to 5m0s for pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-gtchl" to be "success or failure"
Feb 26 10:50:57.085: INFO: Pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.025012ms
Feb 26 10:50:59.098: INFO: Pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026599247s
Feb 26 10:51:01.545: INFO: Pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474165827s
Feb 26 10:51:03.568: INFO: Pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496822625s
Feb 26 10:51:05.579: INFO: Pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.507788661s
STEP: Saw pod success
Feb 26 10:51:05.579: INFO: Pod "pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:51:05.581: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008 container configmap-volume-test:
STEP: delete the pod
Feb 26 10:51:06.090: INFO: Waiting for pod pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:51:06.632: INFO: Pod pod-configmaps-de3bb355-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:51:06.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gtchl" for this suite.
Feb 26 10:51:12.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:51:12.941: INFO: namespace: e2e-tests-configmap-gtchl, resource: bindings, ignored listing per whitelist
Feb 26 10:51:12.995: INFO: namespace e2e-tests-configmap-gtchl deletion completed in 6.331532957s
• [SLOW TEST:16.126 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:51:12.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 10:51:13.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 26 10:51:13.259: INFO: stderr: ""
Feb 26 10:51:13.260: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:51:13.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f68sx" for this suite.
Feb 26 10:51:19.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:51:19.393: INFO: namespace: e2e-tests-kubectl-f68sx, resource: bindings, ignored listing per whitelist
Feb 26 10:51:19.443: INFO: namespace e2e-tests-kubectl-f68sx deletion completed in 6.161956889s
• [SLOW TEST:6.448 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:51:19.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 26 10:51:19.682: INFO: Waiting up to 5m0s for pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-mnxkt" to be "success or failure"
Feb 26 10:51:19.690: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.707875ms
Feb 26 10:51:21.703: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020022207s
Feb 26 10:51:23.721: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037891962s
Feb 26 10:51:27.030: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.347720648s
Feb 26 10:51:29.051: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.367957517s
Feb 26 10:51:31.087: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.404401994s
STEP: Saw pod success
Feb 26 10:51:31.087: INFO: Pod "downward-api-ebb22d30-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:51:31.094: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ebb22d30-5885-11ea-8134-0242ac110008 container dapi-container:
STEP: delete the pod
Feb 26 10:51:31.275: INFO: Waiting for pod downward-api-ebb22d30-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:51:31.327: INFO: Pod downward-api-ebb22d30-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:51:31.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mnxkt" for this suite.
Feb 26 10:51:37.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:51:37.544: INFO: namespace: e2e-tests-downward-api-mnxkt, resource: bindings, ignored listing per whitelist
Feb 26 10:51:37.600: INFO: namespace e2e-tests-downward-api-mnxkt deletion completed in 6.249360471s
• [SLOW TEST:18.156 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:51:37.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 26 10:51:37.887: INFO: Waiting up to 5m0s for pod "pod-f67e63ab-5885-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-2t77n" to be "success or failure"
Feb 26 10:51:37.911: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.41726ms
Feb 26 10:51:39.922: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03420562s
Feb 26 10:51:41.941: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052616805s
Feb 26 10:51:43.959: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071355485s
Feb 26 10:51:45.978: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090241884s
Feb 26 10:51:47.993: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105560685s
STEP: Saw pod success
Feb 26 10:51:47.994: INFO: Pod "pod-f67e63ab-5885-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:51:48.001: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f67e63ab-5885-11ea-8134-0242ac110008 container test-container:
STEP: delete the pod
Feb 26 10:51:48.545: INFO: Waiting for pod pod-f67e63ab-5885-11ea-8134-0242ac110008 to disappear
Feb 26 10:51:48.676: INFO: Pod pod-f67e63ab-5885-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:51:48.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2t77n" for this suite.
Feb 26 10:51:54.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:51:54.917: INFO: namespace: e2e-tests-emptydir-2t77n, resource: bindings, ignored listing per whitelist
Feb 26 10:51:54.976: INFO: namespace e2e-tests-emptydir-2t77n deletion completed in 6.27319494s
• [SLOW TEST:17.376 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:51:54.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-00dd499d-5886-11ea-8134-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-00dd4b24-5886-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-00dd499d-5886-11ea-8134-0242ac110008
STEP: Updating configmap cm-test-opt-upd-00dd4b24-5886-11ea-8134-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-00dd4b3c-5886-11ea-8134-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:53:33.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lj2v6" for this suite.
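The projected-ConfigMap spec above creates, deletes, and updates optional ConfigMap sources and then watches the mounted volume for the change. A minimal sketch of such a projected volume, with assumed names, is shown below; the Optional flag lets the pod start even while a referenced ConfigMap does not yet exist.

// Minimal sketch (assumed names): a projected volume drawing from an optional
// ConfigMap, the mechanism the "optional updates" test above exercises.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt"}, // may not exist yet
								Optional:             &optional,
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}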
Feb 26 10:53:57.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:53:57.818: INFO: namespace: e2e-tests-projected-lj2v6, resource: bindings, ignored listing per whitelist
Feb 26 10:53:57.857: INFO: namespace e2e-tests-projected-lj2v6 deletion completed in 24.174141133s
• [SLOW TEST:122.881 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:53:57.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4a1e5021-5886-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 10:53:58.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-5blfl" to be "success or failure"
Feb 26 10:53:58.330: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 207.34138ms
Feb 26 10:54:00.345: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222297046s
Feb 26 10:54:02.356: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233173745s
Feb 26 10:54:04.380: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257187967s
Feb 26 10:54:06.424: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301407423s
Feb 26 10:54:08.459: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.336430668s
STEP: Saw pod success
Feb 26 10:54:08.460: INFO: Pod "pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 10:54:08.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008 container configmap-volume-test:
STEP: delete the pod
Feb 26 10:54:08.795: INFO: Waiting for pod pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008 to disappear
Feb 26 10:54:09.601: INFO: Pod pod-configmaps-4a209a1d-5886-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:54:09.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5blfl" for this suite.
Feb 26 10:54:15.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 10:54:15.844: INFO: namespace: e2e-tests-configmap-5blfl, resource: bindings, ignored listing per whitelist
Feb 26 10:54:15.869: INFO: namespace e2e-tests-configmap-5blfl deletion completed in 6.258277299s
• [SLOW TEST:18.011 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 10:54:15.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 10:54:16.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-d8rdl" for this suite.
Feb 26 10:54:22.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:54:22.274: INFO: namespace: e2e-tests-services-d8rdl, resource: bindings, ignored listing per whitelist Feb 26 10:54:22.293: INFO: namespace e2e-tests-services-d8rdl deletion completed in 6.217440677s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.423 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:54:22.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-58a7e53b-5886-11ea-8134-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 26 10:54:22.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-lj4vc" to be "success or failure" Feb 26 10:54:22.648: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 39.671625ms Feb 26 10:54:24.672: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064340074s Feb 26 10:54:26.716: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107672661s Feb 26 10:54:28.887: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279567703s Feb 26 10:54:30.924: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315913716s Feb 26 10:54:32.941: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.332690489s STEP: Saw pod success Feb 26 10:54:32.941: INFO: Pod "pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:54:32.946: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 26 10:54:33.096: INFO: Waiting for pod pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:54:33.110: INFO: Pod pod-projected-configmaps-58a8a578-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:54:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lj4vc" for this suite. Feb 26 10:54:39.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:54:39.270: INFO: namespace: e2e-tests-projected-lj4vc, resource: bindings, ignored listing per whitelist Feb 26 10:54:39.385: INFO: namespace e2e-tests-projected-lj4vc deletion completed in 6.268803297s • [SLOW TEST:17.092 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:54:39.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-62e06a8d-5886-11ea-8134-0242ac110008 STEP: Creating a pod to test consume secrets Feb 26 10:54:39.724: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-zwhxr" to be "success or failure" Feb 26 10:54:39.753: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 28.662589ms Feb 26 10:54:41.841: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116611827s Feb 26 10:54:43.870: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145819964s Feb 26 10:54:45.896: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171428047s Feb 26 10:54:47.917: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.193066231s Feb 26 10:54:49.935: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211092601s STEP: Saw pod success Feb 26 10:54:49.936: INFO: Pod "pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:54:49.942: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 26 10:54:50.290: INFO: Waiting for pod pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:54:50.320: INFO: Pod pod-projected-secrets-62e38d3b-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:54:50.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zwhxr" for this suite. Feb 26 10:54:56.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:54:56.787: INFO: namespace: e2e-tests-projected-zwhxr, resource: bindings, ignored listing per whitelist Feb 26 10:54:56.794: INFO: namespace e2e-tests-projected-zwhxr deletion completed in 6.465262487s • [SLOW TEST:17.408 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:54:56.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 26 10:54:57.098: INFO: Waiting up to 5m0s for pod "downward-api-6d410439-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-9hm9s" to be "success or failure" Feb 26 10:54:57.117: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.914076ms Feb 26 10:54:59.128: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029720929s Feb 26 10:55:01.154: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055554155s Feb 26 10:55:03.216: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117449254s Feb 26 10:55:05.235: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.135858754s Feb 26 10:55:07.263: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163934393s STEP: Saw pod success Feb 26 10:55:07.263: INFO: Pod "downward-api-6d410439-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:55:07.272: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6d410439-5886-11ea-8134-0242ac110008 container dapi-container: STEP: delete the pod Feb 26 10:55:07.582: INFO: Waiting for pod downward-api-6d410439-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:55:07.608: INFO: Pod downward-api-6d410439-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:55:07.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9hm9s" for this suite. Feb 26 10:55:13.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:55:13.825: INFO: namespace: e2e-tests-downward-api-9hm9s, resource: bindings, ignored listing per whitelist Feb 26 10:55:13.876: INFO: namespace e2e-tests-downward-api-9hm9s deletion completed in 6.259105507s • [SLOW TEST:17.082 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:55:13.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 26 10:55:14.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-gdgbt" to be "success or failure" Feb 26 10:55:15.140: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 219.714303ms Feb 26 10:55:17.277: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357160811s Feb 26 10:55:19.301: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381247176s Feb 26 10:55:21.329: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.408846111s Feb 26 10:55:23.357: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436389518s Feb 26 10:55:25.375: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.454988247s STEP: Saw pod success Feb 26 10:55:25.375: INFO: Pod "downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:55:25.382: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008 container client-container: STEP: delete the pod Feb 26 10:55:25.446: INFO: Waiting for pod downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:55:25.589: INFO: Pod downwardapi-volume-77e6974c-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:55:25.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gdgbt" for this suite. Feb 26 10:55:31.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:55:31.672: INFO: namespace: e2e-tests-projected-gdgbt, resource: bindings, ignored listing per whitelist Feb 26 10:55:31.838: INFO: namespace e2e-tests-projected-gdgbt deletion completed in 6.237868875s • [SLOW TEST:17.961 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:55:31.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 26 10:55:32.185: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-smzm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-smzm5/configmaps/e2e-watch-test-resource-version,UID:822cf995-5886-11ea-a994-fa163e34d433,ResourceVersion:22967778,Generation:0,CreationTimestamp:2020-02-26 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 26 10:55:32.186: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-smzm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-smzm5/configmaps/e2e-watch-test-resource-version,UID:822cf995-5886-11ea-a994-fa163e34d433,ResourceVersion:22967779,Generation:0,CreationTimestamp:2020-02-26 10:55:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:55:32.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-smzm5" for this suite. Feb 26 10:55:38.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:55:38.493: INFO: namespace: e2e-tests-watch-smzm5, resource: bindings, ignored listing per whitelist Feb 26 10:55:38.502: INFO: namespace e2e-tests-watch-smzm5 deletion completed in 6.310280122s • [SLOW TEST:6.663 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:55:38.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:55:46.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-4dlrk" for this suite. 
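For context on what the Kubelet hostAliases case above exercises: the pod under test declares spec.hostAliases entries, which the kubelet merges into the container's /etc/hosts, and the test then reads that file back to confirm the entries were written. The pod spec itself is not printed in this log, so the sketch below is only an illustration of that shape; the pod name, IPs, hostnames, and busybox image are assumptions, not the actual e2e fixture.

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo              # hypothetical name, not the e2e pod
spec:
  restartPolicy: Never
  hostAliases:                        # entries the kubelet appends to /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox                    # illustrative image
    command: ["cat", "/etc/hosts"]    # the check inspects this file for the aliases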
Feb 26 10:56:34.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:56:35.211: INFO: namespace: e2e-tests-kubelet-test-4dlrk, resource: bindings, ignored listing per whitelist Feb 26 10:56:35.300: INFO: namespace e2e-tests-kubelet-test-4dlrk deletion completed in 48.387679926s • [SLOW TEST:56.798 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:56:35.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 26 10:56:35.705: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-2s6bc" to be "success or failure" Feb 26 10:56:35.738: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.035097ms Feb 26 10:56:37.752: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046766793s Feb 26 10:56:39.769: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063629711s Feb 26 10:56:41.980: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275592433s Feb 26 10:56:44.414: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708807218s Feb 26 10:56:46.468: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.762725188s STEP: Saw pod success Feb 26 10:56:46.468: INFO: Pod "downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:56:46.482: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008 container client-container: STEP: delete the pod Feb 26 10:56:46.678: INFO: Waiting for pod downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:56:46.701: INFO: Pod downwardapi-volume-a8105ff9-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:56:46.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2s6bc" for this suite. Feb 26 10:56:52.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:56:52.907: INFO: namespace: e2e-tests-projected-2s6bc, resource: bindings, ignored listing per whitelist Feb 26 10:56:52.983: INFO: namespace e2e-tests-projected-2s6bc deletion completed in 6.274515644s • [SLOW TEST:17.683 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:56:52.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 26 10:56:53.153: INFO: Waiting up to 5m0s for pod "pod-b278396e-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-cs48m" to be "success or failure" Feb 26 10:56:53.158: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642205ms Feb 26 10:56:55.346: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192710008s Feb 26 10:56:57.372: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218241132s Feb 26 10:56:59.392: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238190459s Feb 26 10:57:01.409: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255588252s Feb 26 10:57:03.434: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.280378515s STEP: Saw pod success Feb 26 10:57:03.434: INFO: Pod "pod-b278396e-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:57:03.444: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b278396e-5886-11ea-8134-0242ac110008 container test-container: STEP: delete the pod Feb 26 10:57:03.549: INFO: Waiting for pod pod-b278396e-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:57:03.612: INFO: Pod pod-b278396e-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:57:03.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cs48m" for this suite. Feb 26 10:57:09.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:57:10.020: INFO: namespace: e2e-tests-emptydir-cs48m, resource: bindings, ignored listing per whitelist Feb 26 10:57:10.032: INFO: namespace e2e-tests-emptydir-cs48m deletion completed in 6.410823794s • [SLOW TEST:17.048 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:57:10.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
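The [It] block that follows creates a pod (named pod-with-poststart-http-hook in the log below) whose container declares a postStart httpGet lifecycle hook pointed at the handler container created in the step above. A rough sketch of that shape, with illustrative image, host, port, and path rather than the real e2e fixture:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name as it appears in the log
spec:
  containers:
  - name: main
    image: nginx                       # illustrative image
    lifecycle:
      postStart:
        httpGet:
          path: /echo                  # hypothetical path served by the handler pod
          host: 10.32.0.5              # hypothetical handler pod IP
          port: 8080                   # hypothetical handler port
# The kubelet does not mark the container Running until the postStart handler
# completes, which is what the "check poststart hook" step below relies on.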
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 26 10:57:28.549: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 10:57:28.659: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 10:57:30.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 10:57:30.727: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 10:57:32.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 10:57:32.695: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 10:57:34.659: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 10:57:34.674: INFO: Pod pod-with-poststart-http-hook still exists Feb 26 10:57:36.660: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 26 10:57:36.677: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:57:36.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-rl7b8" for this suite. Feb 26 10:58:00.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:58:00.750: INFO: namespace: e2e-tests-container-lifecycle-hook-rl7b8, resource: bindings, ignored listing per whitelist Feb 26 10:58:00.898: INFO: namespace e2e-tests-container-lifecycle-hook-rl7b8 deletion completed in 24.213204924s • [SLOW TEST:50.866 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:58:00.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 26 10:58:01.217: INFO: Waiting up to 5m0s for pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-containers-z6bpx" to be "success or failure" Feb 26 10:58:01.336: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 118.841933ms Feb 26 10:58:03.363: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145575241s Feb 26 10:58:05.393: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175447971s Feb 26 10:58:07.412: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194464063s Feb 26 10:58:09.422: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204378991s Feb 26 10:58:11.508: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.290649011s STEP: Saw pod success Feb 26 10:58:11.508: INFO: Pod "client-containers-db099ab4-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:58:11.521: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-db099ab4-5886-11ea-8134-0242ac110008 container test-container: STEP: delete the pod Feb 26 10:58:11.662: INFO: Waiting for pod client-containers-db099ab4-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:58:11.681: INFO: Pod client-containers-db099ab4-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:58:11.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-z6bpx" for this suite. Feb 26 10:58:17.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:58:17.984: INFO: namespace: e2e-tests-containers-z6bpx, resource: bindings, ignored listing per whitelist Feb 26 10:58:18.097: INFO: namespace e2e-tests-containers-z6bpx deletion completed in 6.394239872s • [SLOW TEST:17.198 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:58:18.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-e53ccd9e-5886-11ea-8134-0242ac110008 STEP: Creating a pod to test consume secrets Feb 26 10:58:18.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-cqblv" to be "success or failure" Feb 26 10:58:18.355: INFO: Pod 
"pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.604368ms Feb 26 10:58:21.153: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.832216425s Feb 26 10:58:23.169: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.847837976s Feb 26 10:58:25.183: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.861420473s Feb 26 10:58:27.196: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.874536434s Feb 26 10:58:29.210: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.889037331s Feb 26 10:58:31.241: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.919790814s STEP: Saw pod success Feb 26 10:58:31.241: INFO: Pod "pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:58:31.255: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 26 10:58:31.403: INFO: Waiting for pod pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008 to disappear Feb 26 10:58:31.421: INFO: Pod pod-projected-secrets-e53d5f09-5886-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:58:31.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cqblv" for this suite. 
Feb 26 10:58:37.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:58:37.563: INFO: namespace: e2e-tests-projected-cqblv, resource: bindings, ignored listing per whitelist Feb 26 10:58:37.582: INFO: namespace e2e-tests-projected-cqblv deletion completed in 6.154465131s • [SLOW TEST:19.484 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:58:37.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 26 10:58:37.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-6t8wv' Feb 26 10:58:37.933: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 26 10:58:37.933: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 26 10:58:38.062: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-467jx] Feb 26 10:58:38.062: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-467jx" in namespace "e2e-tests-kubectl-6t8wv" to be "running and ready" Feb 26 10:58:38.220: INFO: Pod "e2e-test-nginx-rc-467jx": Phase="Pending", Reason="", readiness=false. Elapsed: 157.956055ms Feb 26 10:58:40.237: INFO: Pod "e2e-test-nginx-rc-467jx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174297735s Feb 26 10:58:42.259: INFO: Pod "e2e-test-nginx-rc-467jx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197197973s Feb 26 10:58:44.270: INFO: Pod "e2e-test-nginx-rc-467jx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207330288s Feb 26 10:58:46.285: INFO: Pod "e2e-test-nginx-rc-467jx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222478687s Feb 26 10:58:48.300: INFO: Pod "e2e-test-nginx-rc-467jx": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.237411143s Feb 26 10:58:48.300: INFO: Pod "e2e-test-nginx-rc-467jx" satisfied condition "running and ready" Feb 26 10:58:48.300: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-467jx] Feb 26 10:58:48.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6t8wv' Feb 26 10:58:51.021: INFO: stderr: "" Feb 26 10:58:51.022: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Feb 26 10:58:51.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-6t8wv' Feb 26 10:58:51.172: INFO: stderr: "" Feb 26 10:58:51.172: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:58:51.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6t8wv" for this suite. Feb 26 10:59:13.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:59:13.346: INFO: namespace: e2e-tests-kubectl-6t8wv, resource: bindings, ignored listing per whitelist Feb 26 10:59:13.375: INFO: namespace e2e-tests-kubectl-6t8wv deletion completed in 22.177301375s • [SLOW TEST:35.793 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:59:13.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-5dpqd in namespace e2e-tests-proxy-x79mm I0226 10:59:13.650091 9 runners.go:184] Created replication controller with name: proxy-service-5dpqd, namespace: e2e-tests-proxy-x79mm, replica count: 1 I0226 10:59:14.701950 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:15.702896 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:16.704150 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:17.704692 9 
runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:18.705409 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:19.705996 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:20.706653 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0226 10:59:21.707425 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0226 10:59:22.707854 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0226 10:59:23.708640 9 runners.go:184] proxy-service-5dpqd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 26 10:59:23.724: INFO: setup took 10.158158248s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 26 10:59:23.768: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/services/http:proxy-service-5dpqd:portname1/proxy/: foo (200; 44.183918ms) Feb 26 10:59:23.768: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/services/proxy-service-5dpqd:portname2/proxy/: bar (200; 43.883938ms) Feb 26 10:59:23.769: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/pods/proxy-service-5dpqd-bjx9c:160/proxy/: foo (200; 44.251602ms) Feb 26 10:59:23.772: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/pods/http:proxy-service-5dpqd-bjx9c:160/proxy/: foo (200; 47.496744ms) Feb 26 10:59:23.775: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/pods/http:proxy-service-5dpqd-bjx9c:162/proxy/: bar (200; 50.054979ms) Feb 26 10:59:23.775: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/pods/proxy-service-5dpqd-bjx9c:162/proxy/: bar (200; 50.658969ms) Feb 26 10:59:23.781: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x79mm/pods/proxy-service-5dpqd-bjx9c/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-792wf/configmap-test-14547710-5887-11ea-8134-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 26 10:59:37.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-792wf" to be "success or failure" Feb 26 10:59:37.345: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.829565ms Feb 26 10:59:39.362: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028814056s Feb 26 10:59:41.373: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.040329135s Feb 26 10:59:43.918: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585077219s Feb 26 10:59:45.937: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.603940737s Feb 26 10:59:47.948: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.614454196s STEP: Saw pod success Feb 26 10:59:47.948: INFO: Pod "pod-configmaps-1456244b-5887-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 10:59:47.951: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1456244b-5887-11ea-8134-0242ac110008 container env-test: STEP: delete the pod Feb 26 10:59:48.395: INFO: Waiting for pod pod-configmaps-1456244b-5887-11ea-8134-0242ac110008 to disappear Feb 26 10:59:48.418: INFO: Pod pod-configmaps-1456244b-5887-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 10:59:48.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-792wf" for this suite. Feb 26 10:59:54.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 10:59:54.591: INFO: namespace: e2e-tests-configmap-792wf, resource: bindings, ignored listing per whitelist Feb 26 10:59:54.771: INFO: namespace e2e-tests-configmap-792wf deletion completed in 6.343127259s • [SLOW TEST:17.644 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 10:59:54.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-mrtpd STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-mrtpd STEP: Deleting pre-stop pod Feb 26 11:00:20.256: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 11:00:20.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-mrtpd" for this suite. Feb 26 11:01:06.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 11:01:06.657: INFO: namespace: e2e-tests-prestop-mrtpd, resource: bindings, ignored listing per whitelist Feb 26 11:01:06.717: INFO: namespace e2e-tests-prestop-mrtpd deletion completed in 46.373385784s • [SLOW TEST:71.946 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 11:01:06.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 26 11:01:06.973: INFO: Waiting up to 5m0s for pod "pod-49bf37dd-5887-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-ppph7" to be "success or failure" Feb 26 11:01:07.208: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 234.438264ms Feb 26 11:01:09.550: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576648395s Feb 26 11:01:11.572: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599199564s Feb 26 11:01:14.568: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.594691984s Feb 26 11:01:16.593: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.619572908s Feb 26 11:01:18.610: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.637185031s STEP: Saw pod success Feb 26 11:01:18.611: INFO: Pod "pod-49bf37dd-5887-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 11:01:18.620: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-49bf37dd-5887-11ea-8134-0242ac110008 container test-container: STEP: delete the pod Feb 26 11:01:19.273: INFO: Waiting for pod pod-49bf37dd-5887-11ea-8134-0242ac110008 to disappear Feb 26 11:01:19.283: INFO: Pod pod-49bf37dd-5887-11ea-8134-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 11:01:19.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ppph7" for this suite. Feb 26 11:01:27.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 11:01:27.757: INFO: namespace: e2e-tests-emptydir-ppph7, resource: bindings, ignored listing per whitelist Feb 26 11:01:27.844: INFO: namespace e2e-tests-emptydir-ppph7 deletion completed in 8.339156507s • [SLOW TEST:21.126 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 11:01:27.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 26 11:01:38.701: INFO: Successfully updated pod "pod-update-5647c2b9-5887-11ea-8134-0242ac110008" STEP: verifying the updated pod is in kubernetes Feb 26 11:01:38.739: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 11:01:38.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-pwhdd" for this suite. 
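The Pods "should be updated" case above creates a simple pod, mutates one of its mutable fields through the API, and reads the object back to confirm the change (the "updating the pod" / "verifying the updated pod" steps). The log does not show the pod spec or the exact field changed, so the sketch below is only an illustration, assuming a label update; the name and image are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-demo            # hypothetical name
  labels:
    time: "before"                 # a mutable field; the update step changes a label like this
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine       # illustrative image
# After creation, the test updates the live object (for example, setting the
# "time" label to a new value) and then GETs the pod to verify the change took.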
Feb 26 11:02:04.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 11:02:04.948: INFO: namespace: e2e-tests-pods-pwhdd, resource: bindings, ignored listing per whitelist Feb 26 11:02:04.998: INFO: namespace e2e-tests-pods-pwhdd deletion completed in 26.251051534s • [SLOW TEST:37.154 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 11:02:04.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 26 11:02:05.181: INFO: Waiting up to 5m0s for pod "client-containers-6c753431-5887-11ea-8134-0242ac110008" in namespace "e2e-tests-containers-d9p8d" to be "success or failure" Feb 26 11:02:05.199: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.204769ms Feb 26 11:02:07.230: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049265919s Feb 26 11:02:09.404: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222720208s Feb 26 11:02:11.500: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318779427s Feb 26 11:02:13.513: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33210457s Feb 26 11:02:15.959: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.777654648s STEP: Saw pod success Feb 26 11:02:15.959: INFO: Pod "client-containers-6c753431-5887-11ea-8134-0242ac110008" satisfied condition "success or failure" Feb 26 11:02:16.359: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-6c753431-5887-11ea-8134-0242ac110008 container test-container: STEP: delete the pod Feb 26 11:02:16.526: INFO: Waiting for pod client-containers-6c753431-5887-11ea-8134-0242ac110008 to disappear Feb 26 11:02:16.549: INFO: Pod client-containers-6c753431-5887-11ea-8134-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 26 11:02:16.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-d9p8d" for this suite. 
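The Docker Containers case above verifies that spec.containers[].command overrides the image's default ENTRYPOINT. A minimal sketch of that shape, with an illustrative image and arguments rather than the real e2e fixture:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo     # hypothetical; the e2e names are generated
spec:
  restartPolicy: Never
  containers:
  - name: test-container           # container name as logged above
    image: busybox                 # illustrative; any image with a default ENTRYPOINT works
    command: ["/bin/echo"]         # replaces the image ENTRYPOINT
    args: ["override", "command"]  # setting only args would replace just CMD

Omitting both command and args, as in the earlier "should use the image defaults" case, leaves the image's own ENTRYPOINT and CMD in effect.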
Feb 26 11:02:24.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 26 11:02:24.817: INFO: namespace: e2e-tests-containers-d9p8d, resource: bindings, ignored listing per whitelist Feb 26 11:02:24.851: INFO: namespace e2e-tests-containers-d9p8d deletion completed in 8.286535225s • [SLOW TEST:19.852 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 26 11:02:24.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 26 11:02:25.231: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 26 11:02:25.259: INFO: Number of nodes with available pods: 0 Feb 26 11:02:25.259: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
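The steps that follow label a node, wait for the daemon pod to land there, relabel the node so the pod is unscheduled, and then switch the update strategy to RollingUpdate. The DaemonSet "daemon-set" being driven is built in test code rather than from a manifest, but it is roughly the shape sketched below; the selector labels, node label key, initial update strategy, and image are illustrative assumptions.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                 # name as logged above
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: OnDelete                 # the test later switches this to RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      nodeSelector:
        color: blue                # the "Change node label to blue" step targets a selector like this
      containers:
      - name: app
        image: nginx:1.14-alpine   # illustrative image
        ports:
        - containerPort: 80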
Feb 26 11:02:25.413: INFO: Number of nodes with available pods: 0 Feb 26 11:02:25.413: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:26.426: INFO: Number of nodes with available pods: 0 Feb 26 11:02:26.426: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:27.443: INFO: Number of nodes with available pods: 0 Feb 26 11:02:27.444: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:28.538: INFO: Number of nodes with available pods: 0 Feb 26 11:02:28.538: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:29.424: INFO: Number of nodes with available pods: 0 Feb 26 11:02:29.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:30.429: INFO: Number of nodes with available pods: 0 Feb 26 11:02:30.429: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:31.428: INFO: Number of nodes with available pods: 0 Feb 26 11:02:31.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:32.430: INFO: Number of nodes with available pods: 0 Feb 26 11:02:32.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:33.439: INFO: Number of nodes with available pods: 0 Feb 26 11:02:33.439: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:34.428: INFO: Number of nodes with available pods: 0 Feb 26 11:02:34.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:35.430: INFO: Number of nodes with available pods: 1 Feb 26 11:02:35.430: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 26 11:02:35.493: INFO: Number of nodes with available pods: 1 Feb 26 11:02:35.493: INFO: Number of running nodes: 0, number of available pods: 1 Feb 26 11:02:36.526: INFO: Number of nodes with available pods: 0 Feb 26 11:02:36.527: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 26 11:02:36.653: INFO: Number of nodes with available pods: 0 Feb 26 11:02:36.653: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:38.104: INFO: Number of nodes with available pods: 0 Feb 26 11:02:38.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:38.694: INFO: Number of nodes with available pods: 0 Feb 26 11:02:38.695: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:39.734: INFO: Number of nodes with available pods: 0 Feb 26 11:02:39.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:40.676: INFO: Number of nodes with available pods: 0 Feb 26 11:02:40.676: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:41.693: INFO: Number of nodes with available pods: 0 Feb 26 11:02:41.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:42.682: INFO: Number of nodes with available pods: 0 Feb 26 11:02:42.682: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:43.679: INFO: Number of nodes with available pods: 0 Feb 26 11:02:43.679: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:44.672: INFO: Number of 
nodes with available pods: 0 Feb 26 11:02:44.673: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:45.681: INFO: Number of nodes with available pods: 0 Feb 26 11:02:45.681: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:46.666: INFO: Number of nodes with available pods: 0 Feb 26 11:02:46.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:47.671: INFO: Number of nodes with available pods: 0 Feb 26 11:02:47.671: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:48.694: INFO: Number of nodes with available pods: 0 Feb 26 11:02:48.695: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:49.666: INFO: Number of nodes with available pods: 0 Feb 26 11:02:49.666: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:50.673: INFO: Number of nodes with available pods: 0 Feb 26 11:02:50.674: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:51.668: INFO: Number of nodes with available pods: 0 Feb 26 11:02:51.668: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:52.701: INFO: Number of nodes with available pods: 0 Feb 26 11:02:52.701: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:54.245: INFO: Number of nodes with available pods: 0 Feb 26 11:02:54.245: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:54.684: INFO: Number of nodes with available pods: 0 Feb 26 11:02:54.684: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:55.675: INFO: Number of nodes with available pods: 0 Feb 26 11:02:55.676: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:56.671: INFO: Number of nodes with available pods: 0 Feb 26 11:02:56.672: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:57.755: INFO: Number of nodes with available pods: 0 Feb 26 11:02:57.755: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:59.010: INFO: Number of nodes with available pods: 0 Feb 26 11:02:59.010: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:02:59.677: INFO: Number of nodes with available pods: 0 Feb 26 11:02:59.678: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 26 11:03:00.899: INFO: Number of nodes with available pods: 1 Feb 26 11:03:00.899: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kcp6n, will wait for the garbage collector to delete the pods Feb 26 11:03:01.144: INFO: Deleting DaemonSet.extensions daemon-set took: 165.796358ms Feb 26 11:03:01.245: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.866688ms Feb 26 11:03:12.797: INFO: Number of nodes with available pods: 0 Feb 26 11:03:12.797: INFO: Number of running nodes: 0, number of available pods: 0 Feb 26 11:03:12.839: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kcp6n/daemonsets","resourceVersion":"22968755"},"items":null} 
Feb 26 11:03:12.871: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kcp6n/pods","resourceVersion":"22968755"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:03:12.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kcp6n" for this suite.
Feb 26 11:03:20.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:03:21.084: INFO: namespace: e2e-tests-daemonsets-kcp6n, resource: bindings, ignored listing per whitelist
Feb 26 11:03:21.114: INFO: namespace e2e-tests-daemonsets-kcp6n deletion completed in 8.164480304s

• [SLOW TEST:56.263 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:03:21.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 26 11:03:21.396: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 26 11:03:21.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:03:22.026: INFO: stderr: ""
Feb 26 11:03:22.027: INFO: stdout: "service/redis-slave created\n"
Feb 26 11:03:22.028: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 26 11:03:22.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:03:22.712: INFO: stderr: ""
Feb 26 11:03:22.712: INFO: stdout: "service/redis-master created\n"
Feb 26 11:03:22.713: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 26 11:03:22.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:03:23.220: INFO: stderr: ""
Feb 26 11:03:23.220: INFO: stdout: "service/frontend created\n"
Feb 26 11:03:23.222: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 26 11:03:23.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:03:23.737: INFO: stderr: ""
Feb 26 11:03:23.737: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 26 11:03:23.738: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 26 11:03:23.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:03:24.310: INFO: stderr: ""
Feb 26 11:03:24.310: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 26 11:03:24.312: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 26 11:03:24.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:03:24.957: INFO: stderr: ""
Feb 26 11:03:24.958: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 26 11:03:24.958: INFO: Waiting for all frontend pods to be Running.
Feb 26 11:03:55.012: INFO: Waiting for frontend to serve content.
Feb 26 11:03:55.511: INFO: Trying to add a new entry to the guestbook.
Feb 26 11:03:55.553: INFO: Verifying that added entry can be retrieved.
Feb 26 11:03:55.610: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb 26 11:04:00.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:04:00.993: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:04:00.994: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 26 11:04:00.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:04:01.401: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:04:01.401: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 26 11:04:01.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:04:01.556: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:04:01.556: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 26 11:04:01.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:04:01.719: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:04:01.719: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 26 11:04:01.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:04:01.952: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:04:01.953: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 26 11:04:01.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wrjgq'
Feb 26 11:04:02.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:04:02.409: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:04:02.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wrjgq" for this suite.
Feb 26 11:04:56.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:04:56.774: INFO: namespace: e2e-tests-kubectl-wrjgq, resource: bindings, ignored listing per whitelist
Feb 26 11:04:56.801: INFO: namespace e2e-tests-kubectl-wrjgq deletion completed in 54.306129979s

• [SLOW TEST:95.687 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:04:56.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0226 11:05:13.313241 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 11:05:13.313: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:05:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zj2g5" for this suite.
Feb 26 11:05:35.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:05:36.049: INFO: namespace: e2e-tests-gc-zj2g5, resource: bindings, ignored listing per whitelist
Feb 26 11:05:36.115: INFO: namespace e2e-tests-gc-zj2g5 deletion completed in 22.795102419s

• [SLOW TEST:39.313 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:05:36.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:05:43.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-cx94g" for this suite.
Feb 26 11:05:49.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:05:49.267: INFO: namespace: e2e-tests-namespaces-cx94g, resource: bindings, ignored listing per whitelist
Feb 26 11:05:49.350: INFO: namespace e2e-tests-namespaces-cx94g deletion completed in 6.245705075s
STEP: Destroying namespace "e2e-tests-nsdeletetest-v8hlj" for this suite.
Feb 26 11:05:49.354: INFO: Namespace e2e-tests-nsdeletetest-v8hlj was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-5mc56" for this suite.
Feb 26 11:05:55.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:05:55.531: INFO: namespace: e2e-tests-nsdeletetest-5mc56, resource: bindings, ignored listing per whitelist
Feb 26 11:05:55.565: INFO: namespace e2e-tests-nsdeletetest-5mc56 deletion completed in 6.211279606s

• [SLOW TEST:19.450 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:05:55.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wtslf;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wtslf;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wtslf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.171.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.171.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.171.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.171.202_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wtslf;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wtslf;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wtslf.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wtslf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wtslf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.171.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.171.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.171.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.171.202_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 11:06:12.675: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:12.852: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:12.959: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:12.985: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.071: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.215: INFO: Unable to read 10.106.171.202_udp@PTR from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.248: INFO: Unable to read 10.106.171.202_tcp@PTR from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.315: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.382: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.419: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wtslf from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.434: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wtslf from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.540: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.583: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.644: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.654: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.660: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.665: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.670: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-f5ed3d34-5887-11ea-8134-0242ac110008)
Feb 26 11:06:13.676: INFO: Lookups using e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008 failed for: [wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.171.202_udp@PTR 10.106.171.202_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wtslf jessie_tcp@dns-test-service.e2e-tests-dns-wtslf jessie_udp@dns-test-service.e2e-tests-dns-wtslf.svc jessie_tcp@dns-test-service.e2e-tests-dns-wtslf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wtslf.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wtslf.svc jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 26 11:06:19.360: INFO: DNS probes using e2e-tests-dns-wtslf/dns-test-f5ed3d34-5887-11ea-8134-0242ac110008 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:06:19.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-wtslf" for this suite.
Feb 26 11:06:28.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:06:28.794: INFO: namespace: e2e-tests-dns-wtslf, resource: bindings, ignored listing per whitelist
Feb 26 11:06:28.963: INFO: namespace e2e-tests-dns-wtslf deletion completed in 9.026117784s

• [SLOW TEST:33.397 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:06:28.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-09d706d7-5888-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 11:06:29.364: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-tqpdw" to be "success or failure"
Feb 26 11:06:29.409: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 43.881592ms
Feb 26 11:06:31.462: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096875346s
Feb 26 11:06:33.499: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134349448s
Feb 26 11:06:35.554: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189102316s
Feb 26 11:06:37.575: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210365798s
Feb 26 11:06:39.584: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.219431438s
STEP: Saw pod success
Feb 26 11:06:39.584: INFO: Pod "pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:06:39.587: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 26 11:06:40.492: INFO: Waiting for pod pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:06:40.512: INFO: Pod pod-projected-secrets-09d8089d-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:06:40.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tqpdw" for this suite.
Feb 26 11:06:46.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:06:46.928: INFO: namespace: e2e-tests-projected-tqpdw, resource: bindings, ignored listing per whitelist
Feb 26 11:06:46.951: INFO: namespace e2e-tests-projected-tqpdw deletion completed in 6.386199984s

• [SLOW TEST:17.988 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:06:46.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 11:06:47.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-cncgv" to be "success or failure"
Feb 26 11:06:47.555: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.757596ms
Feb 26 11:06:50.260: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719115182s
Feb 26 11:06:52.284: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.743342957s
Feb 26 11:06:54.841: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.300361358s
Feb 26 11:06:56.906: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.365631271s
Feb 26 11:06:58.934: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.392707277s
STEP: Saw pod success
Feb 26 11:06:58.934: INFO: Pod "downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:06:58.946: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 11:06:59.212: INFO: Waiting for pod downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:06:59.228: INFO: Pod downwardapi-volume-14c176a3-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:06:59.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cncgv" for this suite.
Feb 26 11:07:05.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:07:05.472: INFO: namespace: e2e-tests-downward-api-cncgv, resource: bindings, ignored listing per whitelist
Feb 26 11:07:05.555: INFO: namespace e2e-tests-downward-api-cncgv deletion completed in 6.315996082s

• [SLOW TEST:18.604 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:07:05.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 11:07:05.796: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 11.343229ms)
Feb 26 11:07:05.801: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.228212ms)
Feb 26 11:07:05.807: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.069448ms)
Feb 26 11:07:05.814: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.514355ms)
Feb 26 11:07:05.834: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.716162ms)
Feb 26 11:07:05.971: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 136.419611ms)
Feb 26 11:07:05.980: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.67048ms)
Feb 26 11:07:05.993: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.570927ms)
Feb 26 11:07:06.003: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.871315ms)
Feb 26 11:07:06.013: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.9714ms)
Feb 26 11:07:06.022: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.704139ms)
Feb 26 11:07:06.035: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.680927ms)
Feb 26 11:07:06.049: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.189394ms)
Feb 26 11:07:06.154: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 104.826741ms)
Feb 26 11:07:06.171: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.843172ms)
Feb 26 11:07:06.178: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.01606ms)
Feb 26 11:07:06.183: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.28977ms)
Feb 26 11:07:06.189: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.096122ms)
Feb 26 11:07:06.194: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.69489ms)
Feb 26 11:07:06.200: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.126357ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:07:06.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-8qvbn" for this suite.
Feb 26 11:07:12.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:07:12.408: INFO: namespace: e2e-tests-proxy-8qvbn, resource: bindings, ignored listing per whitelist
Feb 26 11:07:12.454: INFO: namespace e2e-tests-proxy-8qvbn deletion completed in 6.248837571s

• [SLOW TEST:6.898 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:07:12.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 11:07:12.917: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"23dfe0f7-5888-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00125798a), BlockOwnerDeletion:(*bool)(0xc00125798b)}}
Feb 26 11:07:12.941: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"23c5bdfd-5888-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000d1e19a), BlockOwnerDeletion:(*bool)(0xc000d1e19b)}}
Feb 26 11:07:12.955: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"23d6c5d6-5888-11ea-a994-fa163e34d433", Controller:(*bool)(0xc000f29f1a), BlockOwnerDeletion:(*bool)(0xc000f29f1b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:07:17.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cf82t" for this suite.
Feb 26 11:07:24.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:07:24.264: INFO: namespace: e2e-tests-gc-cf82t, resource: bindings, ignored listing per whitelist
Feb 26 11:07:24.298: INFO: namespace e2e-tests-gc-cf82t deletion completed in 6.29176883s

• [SLOW TEST:11.844 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
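For context, the dependency circle above is built from plain pod-to-pod ownerReferences like the ones dumped at 11:07:12. A rough manifest-style sketch of one link in that circle (pod1 owned by pod3) follows; the uid is copied from the log, while the container image and the controller/blockOwnerDeletion values are illustrative, and in practice the references are set against live pods because a UID only exists once the API server has created the object.

# Illustrative sketch only, not the exact objects the e2e framework creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 23dfe0f7-5888-11ea-a994-fa163e34d433   # UID of pod3, taken from the log line above
    controller: true                             # illustrative; the dump only shows pointer values
    blockOwnerDeletion: true                     # illustrative
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1                  # stand-in image for the sketch

The point of the test is that deleting pods wired together like this does not wedge the garbage collector, even though every pod is a dependent of another.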
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:07:24.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 11:07:24.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-h5lrp" to be "success or failure"
Feb 26 11:07:24.679: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 156.270942ms
Feb 26 11:07:26.690: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166919202s
Feb 26 11:07:28.711: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187925196s
Feb 26 11:07:30.720: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196665549s
Feb 26 11:07:32.737: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214408351s
Feb 26 11:07:34.745: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22229257s
STEP: Saw pod success
Feb 26 11:07:34.746: INFO: Pod "downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:07:35.460: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 11:07:35.840: INFO: Waiting for pod downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:07:35.914: INFO: Pod downwardapi-volume-2ac917b6-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:07:35.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h5lrp" for this suite.
Feb 26 11:07:42.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:07:42.146: INFO: namespace: e2e-tests-downward-api-h5lrp, resource: bindings, ignored listing per whitelist
Feb 26 11:07:42.247: INFO: namespace e2e-tests-downward-api-h5lrp deletion completed in 6.291289844s

• [SLOW TEST:17.949 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
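The downward API volume exercised above surfaces the container's own cpu limit as a file inside the pod. A minimal sketch of that wiring is shown below; the pod name, busybox image, and /etc/podinfo mount path are illustrative stand-ins, not the spec the framework generates.

# Illustrative sketch of a downward API volume exposing limits.cpu as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m

With a 1-cpu limit and a 1m divisor, the mounted cpu_limit file would contain 1000, which is the kind of value the test reads back from the client container's logs.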
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:07:42.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 26 11:07:42.470: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969595,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 26 11:07:42.471: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969595,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 26 11:07:52.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969609,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 26 11:07:52.719: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969609,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 26 11:08:02.759: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969621,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 26 11:08:02.761: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969621,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 26 11:08:12.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969633,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 26 11:08:12.802: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-a,UID:357f52d4-5888-11ea-a994-fa163e34d433,ResourceVersion:22969633,Generation:0,CreationTimestamp:2020-02-26 11:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 26 11:08:22.849: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-b,UID:4d8d2fe3-5888-11ea-a994-fa163e34d433,ResourceVersion:22969646,Generation:0,CreationTimestamp:2020-02-26 11:08:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 26 11:08:22.855: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-b,UID:4d8d2fe3-5888-11ea-a994-fa163e34d433,ResourceVersion:22969646,Generation:0,CreationTimestamp:2020-02-26 11:08:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 26 11:08:32.945: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-b,UID:4d8d2fe3-5888-11ea-a994-fa163e34d433,ResourceVersion:22969659,Generation:0,CreationTimestamp:2020-02-26 11:08:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 26 11:08:32.946: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nqvjg,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqvjg/configmaps/e2e-watch-test-configmap-b,UID:4d8d2fe3-5888-11ea-a994-fa163e34d433,ResourceVersion:22969659,Generation:0,CreationTimestamp:2020-02-26 11:08:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:08:42.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nqvjg" for this suite.
Feb 26 11:08:48.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:08:49.116: INFO: namespace: e2e-tests-watch-nqvjg, resource: bindings, ignored listing per whitelist
Feb 26 11:08:49.124: INFO: namespace e2e-tests-watch-nqvjg deletion completed in 6.158163044s

• [SLOW TEST:66.877 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
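The ADDED/MODIFIED/DELETED entries above are watch events for an ordinary ConfigMap; each event is logged twice because two of the three watchers created in the test (the label-A watch and the A-or-B watch) match the object. Rendered as a manifest, with the data key reflecting the first mutation, the object looks roughly like this (reconstructed from the dumps rather than copied from the test source):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"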
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:08:49.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-g929k
Feb 26 11:08:59.467: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-g929k
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 11:08:59.522: INFO: Initial restart count of pod liveness-http is 0
Feb 26 11:09:20.326: INFO: Restart count of pod e2e-tests-container-probe-g929k/liveness-http is now 1 (20.803071934s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:09:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-g929k" for this suite.
Feb 26 11:09:26.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:09:26.681: INFO: namespace: e2e-tests-container-probe-g929k, resource: bindings, ignored listing per whitelist
Feb 26 11:09:26.714: INFO: namespace e2e-tests-container-probe-g929k deletion completed in 6.319157388s

• [SLOW TEST:37.589 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
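The restart recorded at 11:09:20 comes from an HTTP liveness probe against /healthz. A minimal sketch of a pod wired that way is below; the image, port, and timings are illustrative (the conformance test ships its own liveness image and thresholds), so treat it as the general shape rather than the exact spec.

# Illustrative sketch: the kubelet restarts the container once GET /healthz starts failing.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # sample image that serves /healthz, then starts returning errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1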
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:09:26.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-73c1969d-5888-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 11:09:27.015: INFO: Waiting up to 5m0s for pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-bmcm6" to be "success or failure"
Feb 26 11:09:27.040: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.352051ms
Feb 26 11:09:29.548: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532014063s
Feb 26 11:09:31.557: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.541058042s
Feb 26 11:09:33.571: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.555104567s
Feb 26 11:09:35.586: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570081408s
Feb 26 11:09:37.605: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.589858479s
STEP: Saw pod success
Feb 26 11:09:37.606: INFO: Pod "pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:09:37.609: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 26 11:09:37.675: INFO: Waiting for pod pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:09:37.684: INFO: Pod pod-configmaps-73c2b053-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:09:37.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bmcm6" for this suite.
Feb 26 11:09:43.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:09:43.980: INFO: namespace: e2e-tests-configmap-bmcm6, resource: bindings, ignored listing per whitelist
Feb 26 11:09:44.175: INFO: namespace e2e-tests-configmap-bmcm6 deletion completed in 6.472912401s

• [SLOW TEST:17.461 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
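This spec mounts a single ConfigMap through two volumes in one pod and reads the same key from both paths. A minimal hand-rolled sketch of the same wiring; the ConfigMap name, key and mount paths are illustrative:

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/cm-1
    - name: cm-vol-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-vol-1
    configMap:
      name: cm-demo
  - name: cm-vol-2
    configMap:
      name: cm-demo
EOF
# once the pod reaches Succeeded, both mounts show the same value:
kubectl logs cm-two-volumes
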
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:09:44.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 26 11:09:44.434: INFO: Waiting up to 5m0s for pod "pod-7e3001a9-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-wltfn" to be "success or failure"
Feb 26 11:09:44.484: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 49.86153ms
Feb 26 11:09:46.575: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141157092s
Feb 26 11:09:48.619: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185045086s
Feb 26 11:09:50.646: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212509734s
Feb 26 11:09:52.667: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.233439095s
Feb 26 11:09:54.690: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255878998s
STEP: Saw pod success
Feb 26 11:09:54.690: INFO: Pod "pod-7e3001a9-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:09:54.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7e3001a9-5888-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 11:09:54.974: INFO: Waiting for pod pod-7e3001a9-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:09:54.985: INFO: Pod pod-7e3001a9-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:09:54.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wltfn" for this suite.
Feb 26 11:10:03.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:10:03.102: INFO: namespace: e2e-tests-emptydir-wltfn, resource: bindings, ignored listing per whitelist
Feb 26 11:10:03.250: INFO: namespace e2e-tests-emptydir-wltfn deletion completed in 8.255063133s

• [SLOW TEST:19.073 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
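The (non-root,0777,tmpfs) case uses a memory-backed emptyDir mounted by a non-root container; the 0777 in the name refers to the file permissions the test container sets and verifies inside that volume. A reduced sketch of the same setup (uid, pod name and paths are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /mnt/scratch && touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && ls -l /mnt/scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory        # tmpfs-backed
EOF
# expect a tmpfs mount line and -rwxrwxrwx on the test file:
kubectl logs emptydir-tmpfs-demo
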
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:10:03.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-898ee886-5888-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 11:10:03.508: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-mr2vd" to be "success or failure"
Feb 26 11:10:03.512: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213258ms
Feb 26 11:10:05.592: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084208954s
Feb 26 11:10:07.603: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095699442s
Feb 26 11:10:09.620: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11230562s
Feb 26 11:10:11.640: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131761703s
Feb 26 11:10:13.655: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146881396s
STEP: Saw pod success
Feb 26 11:10:13.655: INFO: Pod "pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:10:13.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 11:10:14.430: INFO: Waiting for pod pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:10:14.440: INFO: Pod pod-projected-configmaps-898fcee5-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:10:14.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mr2vd" for this suite.
Feb 26 11:10:20.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:10:20.687: INFO: namespace: e2e-tests-projected-mr2vd, resource: bindings, ignored listing per whitelist
Feb 26 11:10:20.712: INFO: namespace e2e-tests-projected-mr2vd deletion completed in 6.262159608s

• [SLOW TEST:17.462 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
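Here the ConfigMap is consumed through a projected volume with an explicit defaultMode, which controls the permissions of the files the kubelet writes. A hedged sketch; the names and the 0400 mode are illustrative:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mode
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      defaultMode: 0400
      sources:
      - configMap:
          name: projected-cm-demo
EOF
# prints value-1; the 0400 applies to the projected file itself:
kubectl logs projected-cm-mode
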
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:10:20.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 11:10:20.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 26 11:10:20.935: INFO: stderr: ""
Feb 26 11:10:20.935: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 26 11:10:20.944: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:10:20.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dqhf4" for this suite.
Feb 26 11:10:26.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:10:27.038: INFO: namespace: e2e-tests-kubectl-dqhf4, resource: bindings, ignored listing per whitelist
Feb 26 11:10:27.190: INFO: namespace e2e-tests-kubectl-dqhf4 deletion completed in 6.234824714s

S [SKIPPING] [6.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 26 11:10:20.944: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
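The spec is skipped by the version gate logged above (the server predates "1.13.12"), but what it exercises is ordinary kubectl describe output for a replication controller and its pods. By hand, against any namespace (the pod name below is illustrative):

kubectl run describe-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl describe pod describe-demo     # node, IP, container state, conditions, events
kubectl describe rc                    # the same style of summary for any replication controllers
kubectl delete pod describe-demo
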
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:10:27.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h5j94
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 26 11:10:27.482: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 26 11:11:03.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-h5j94 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 11:11:03.961: INFO: >>> kubeConfig: /root/.kube/config
I0226 11:11:04.103589       9 log.go:172] (0xc0000dcd10) (0xc002351040) Create stream
I0226 11:11:04.103734       9 log.go:172] (0xc0000dcd10) (0xc002351040) Stream added, broadcasting: 1
I0226 11:11:04.108426       9 log.go:172] (0xc0000dcd10) Reply frame received for 1
I0226 11:11:04.108458       9 log.go:172] (0xc0000dcd10) (0xc0023c6960) Create stream
I0226 11:11:04.108466       9 log.go:172] (0xc0000dcd10) (0xc0023c6960) Stream added, broadcasting: 3
I0226 11:11:04.109593       9 log.go:172] (0xc0000dcd10) Reply frame received for 3
I0226 11:11:04.109614       9 log.go:172] (0xc0000dcd10) (0xc001a12000) Create stream
I0226 11:11:04.109630       9 log.go:172] (0xc0000dcd10) (0xc001a12000) Stream added, broadcasting: 5
I0226 11:11:04.110514       9 log.go:172] (0xc0000dcd10) Reply frame received for 5
I0226 11:11:04.342060       9 log.go:172] (0xc0000dcd10) Data frame received for 3
I0226 11:11:04.342174       9 log.go:172] (0xc0023c6960) (3) Data frame handling
I0226 11:11:04.342201       9 log.go:172] (0xc0023c6960) (3) Data frame sent
I0226 11:11:04.513595       9 log.go:172] (0xc0000dcd10) Data frame received for 1
I0226 11:11:04.513872       9 log.go:172] (0xc002351040) (1) Data frame handling
I0226 11:11:04.513931       9 log.go:172] (0xc002351040) (1) Data frame sent
I0226 11:11:04.513967       9 log.go:172] (0xc0000dcd10) (0xc002351040) Stream removed, broadcasting: 1
I0226 11:11:04.514762       9 log.go:172] (0xc0000dcd10) (0xc0023c6960) Stream removed, broadcasting: 3
I0226 11:11:04.514883       9 log.go:172] (0xc0000dcd10) (0xc001a12000) Stream removed, broadcasting: 5
I0226 11:11:04.514925       9 log.go:172] (0xc0000dcd10) (0xc002351040) Stream removed, broadcasting: 1
I0226 11:11:04.514931       9 log.go:172] (0xc0000dcd10) (0xc0023c6960) Stream removed, broadcasting: 3
I0226 11:11:04.514934       9 log.go:172] (0xc0000dcd10) (0xc001a12000) Stream removed, broadcasting: 5
Feb 26 11:11:04.515: INFO: Found all expected endpoints: [netserver-0]
I0226 11:11:04.515846       9 log.go:172] (0xc0000dcd10) Go away received
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:11:04.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-h5j94" for this suite.
Feb 26 11:11:30.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:11:30.721: INFO: namespace: e2e-tests-pod-network-test-h5j94, resource: bindings, ignored listing per whitelist
Feb 26 11:11:30.778: INFO: namespace e2e-tests-pod-network-test-h5j94 deletion completed in 26.223722204s

• [SLOW TEST:63.587 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
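The connectivity check boils down to curling the netserver pod's /hostName endpoint from the host-network test pod and matching the answer against the expected endpoint list ([netserver-0] here). While the test pods exist it can be reproduced by hand; the namespace and pod names are the ones from this run, and the pod IP is looked up rather than hard-coded:

NS=e2e-tests-pod-network-test-h5j94      # deleted again once the spec finishes
POD_IP=$(kubectl -n "$NS" get pod netserver-0 -o jsonpath='{.status.podIP}')
kubectl -n "$NS" exec host-test-container-pod -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 "http://${POD_IP}:8080/hostName"
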
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:11:30.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bdc86827-5888-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 11:11:31.133: INFO: Waiting up to 5m0s for pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-s86qc" to be "success or failure"
Feb 26 11:11:31.158: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.327985ms
Feb 26 11:11:33.173: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040063213s
Feb 26 11:11:35.205: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072277404s
Feb 26 11:11:37.932: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799458357s
Feb 26 11:11:39.951: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.818417466s
Feb 26 11:11:42.253: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.120665423s
Feb 26 11:11:44.271: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.137783923s
STEP: Saw pod success
Feb 26 11:11:44.271: INFO: Pod "pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:11:44.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 26 11:11:44.644: INFO: Waiting for pod pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:11:44.662: INFO: Pod pod-secrets-bdc9bbc1-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:11:44.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s86qc" for this suite.
Feb 26 11:11:50.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:11:51.148: INFO: namespace: e2e-tests-secrets-s86qc, resource: bindings, ignored listing per whitelist
Feb 26 11:11:51.161: INFO: namespace e2e-tests-secrets-s86qc deletion completed in 6.389874843s

• [SLOW TEST:20.383 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
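Same pattern as the ConfigMap volume specs, but with a Secret and an explicit defaultMode. A minimal sketch; the secret name, key and 0400 mode are illustrative:

kubectl create secret generic secret-mode-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-pod
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400
EOF
# prints value-1; the mode is applied to the file the kubelet materialises:
kubectl logs secret-mode-pod
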
SS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:11:51.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c9d76ae0-5888-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c9d76ae0-5888-11ea-8134-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:12:01.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b9jv2" for this suite.
Feb 26 11:12:25.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:12:25.632: INFO: namespace: e2e-tests-projected-b9jv2, resource: bindings, ignored listing per whitelist
Feb 26 11:12:25.841: INFO: namespace e2e-tests-projected-b9jv2 deletion completed in 24.290447966s

• [SLOW TEST:34.679 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:12:25.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0226 11:12:29.558960       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 11:12:29.559: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:12:29.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-89bqm" for this suite.
Feb 26 11:12:37.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:12:37.803: INFO: namespace: e2e-tests-gc-89bqm, resource: bindings, ignored listing per whitelist
Feb 26 11:12:37.866: INFO: namespace e2e-tests-gc-89bqm deletion completed in 8.296611001s

• [SLOW TEST:12.025 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
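What is being asserted is ordinary cascading deletion: deleting a Deployment without orphaning lets the garbage collector remove its ReplicaSet and pods, and the "expected 0 ..., got ..." STEPs are just the poll catching the collector mid-flight. A hand-run sketch on a cluster of this vintage (the deployment name is illustrative; --cascade took a boolean before kubectl 1.20):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get rs,pods -l app=gc-demo       # the Deployment-owned ReplicaSet and its pod
kubectl delete deployment gc-demo --cascade=true
kubectl get rs,pods -l app=gc-demo       # empties out once the garbage collector catches up
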
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:12:37.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 11:12:38.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-wvv4b" to be "success or failure"
Feb 26 11:12:38.099: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848764ms
Feb 26 11:12:40.204: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11418992s
Feb 26 11:12:42.220: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12995932s
Feb 26 11:12:44.241: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151224959s
Feb 26 11:12:46.623: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533525154s
Feb 26 11:12:48.640: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.549978729s
STEP: Saw pod success
Feb 26 11:12:48.640: INFO: Pod "downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:12:48.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 11:12:50.027: INFO: Waiting for pod downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008 to disappear
Feb 26 11:12:50.045: INFO: Pod downwardapi-volume-e5b01b60-5888-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:12:50.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wvv4b" for this suite.
Feb 26 11:12:56.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:12:56.274: INFO: namespace: e2e-tests-projected-wvv4b, resource: bindings, ignored listing per whitelist
Feb 26 11:12:56.350: INFO: namespace e2e-tests-projected-wvv4b deletion completed in 6.27122203s

• [SLOW TEST:18.483 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
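The downward-API volume in this spec exposes the container's own CPU request as a file. A sketch of the same wiring; the pod name, the 250m request and the 1m divisor are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
# prints 250, i.e. the request expressed in units of the 1m divisor:
kubectl logs downward-cpu-demo
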
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:12:56.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-p7dnt
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 26 11:12:56.690: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 26 11:13:28.920: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-p7dnt PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 11:13:28.920: INFO: >>> kubeConfig: /root/.kube/config
I0226 11:13:29.028452       9 log.go:172] (0xc000d35970) (0xc002063180) Create stream
I0226 11:13:29.028639       9 log.go:172] (0xc000d35970) (0xc002063180) Stream added, broadcasting: 1
I0226 11:13:29.052389       9 log.go:172] (0xc000d35970) Reply frame received for 1
I0226 11:13:29.052473       9 log.go:172] (0xc000d35970) (0xc001ea0640) Create stream
I0226 11:13:29.052484       9 log.go:172] (0xc000d35970) (0xc001ea0640) Stream added, broadcasting: 3
I0226 11:13:29.055600       9 log.go:172] (0xc000d35970) Reply frame received for 3
I0226 11:13:29.055641       9 log.go:172] (0xc000d35970) (0xc000bbdea0) Create stream
I0226 11:13:29.055664       9 log.go:172] (0xc000d35970) (0xc000bbdea0) Stream added, broadcasting: 5
I0226 11:13:29.057732       9 log.go:172] (0xc000d35970) Reply frame received for 5
I0226 11:13:29.291993       9 log.go:172] (0xc000d35970) Data frame received for 3
I0226 11:13:29.292118       9 log.go:172] (0xc001ea0640) (3) Data frame handling
I0226 11:13:29.292147       9 log.go:172] (0xc001ea0640) (3) Data frame sent
I0226 11:13:29.529294       9 log.go:172] (0xc000d35970) Data frame received for 1
I0226 11:13:29.529841       9 log.go:172] (0xc000d35970) (0xc001ea0640) Stream removed, broadcasting: 3
I0226 11:13:29.530167       9 log.go:172] (0xc002063180) (1) Data frame handling
I0226 11:13:29.530289       9 log.go:172] (0xc002063180) (1) Data frame sent
I0226 11:13:29.530301       9 log.go:172] (0xc000d35970) (0xc002063180) Stream removed, broadcasting: 1
I0226 11:13:29.531600       9 log.go:172] (0xc000d35970) (0xc000bbdea0) Stream removed, broadcasting: 5
I0226 11:13:29.531693       9 log.go:172] (0xc000d35970) (0xc002063180) Stream removed, broadcasting: 1
I0226 11:13:29.531710       9 log.go:172] (0xc000d35970) (0xc001ea0640) Stream removed, broadcasting: 3
I0226 11:13:29.531723       9 log.go:172] (0xc000d35970) (0xc000bbdea0) Stream removed, broadcasting: 5
Feb 26 11:13:29.533: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:13:29.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0226 11:13:29.534474       9 log.go:172] (0xc000d35970) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-p7dnt" for this suite.
Feb 26 11:13:55.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:13:56.076: INFO: namespace: e2e-tests-pod-network-test-p7dnt, resource: bindings, ignored listing per whitelist
Feb 26 11:13:56.110: INFO: namespace e2e-tests-pod-network-test-p7dnt deletion completed in 26.462570772s

• [SLOW TEST:59.759 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
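Unlike the HTTP variant, UDP reachability is checked indirectly: curl hits the /dial endpoint of one test pod, which sends a UDP hostName request to the other pod and reports the replies. The command the framework ran (IPs and names are specific to this run) was effectively:

kubectl -n e2e-tests-pod-network-test-p7dnt exec host-test-container-pod -- /bin/sh -c \
  "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
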
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:13:56.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 26 11:13:56.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nvldl'
Feb 26 11:13:58.963: INFO: stderr: ""
Feb 26 11:13:58.964: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 26 11:13:58.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nvldl'
Feb 26 11:14:03.947: INFO: stderr: ""
Feb 26 11:14:03.947: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:14:03.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nvldl" for this suite.
Feb 26 11:14:10.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:14:10.223: INFO: namespace: e2e-tests-kubectl-nvldl, resource: bindings, ignored listing per whitelist
Feb 26 11:14:10.281: INFO: namespace e2e-tests-kubectl-nvldl deletion completed in 6.269224262s

• [SLOW TEST:14.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:14:10.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb 26 11:14:11.079: INFO: Waiting up to 5m0s for pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl" in namespace "e2e-tests-svcaccounts-g8dxv" to be "success or failure"
Feb 26 11:14:11.253: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 173.942116ms
Feb 26 11:14:13.481: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402156663s
Feb 26 11:14:15.516: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.437050623s
Feb 26 11:14:17.945: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.865752175s
Feb 26 11:14:20.104: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.024939237s
Feb 26 11:14:22.908: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.829195119s
Feb 26 11:14:24.923: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Pending", Reason="", readiness=false. Elapsed: 13.843697426s
Feb 26 11:14:26.959: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.879470139s
STEP: Saw pod success
Feb 26 11:14:26.959: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl" satisfied condition "success or failure"
Feb 26 11:14:26.968: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl container token-test: 
STEP: delete the pod
Feb 26 11:14:27.953: INFO: Waiting for pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl to disappear
Feb 26 11:14:28.032: INFO: Pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-4l5gl no longer exists
STEP: Creating a pod to test consume service account root CA
Feb 26 11:14:28.046: INFO: Waiting up to 5m0s for pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr" in namespace "e2e-tests-svcaccounts-g8dxv" to be "success or failure"
Feb 26 11:14:28.070: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 23.880409ms
Feb 26 11:14:30.087: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041571108s
Feb 26 11:14:32.117: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07165807s
Feb 26 11:14:34.471: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425020836s
Feb 26 11:14:36.730: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68403092s
Feb 26 11:14:38.771: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.725212046s
Feb 26 11:14:40.785: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.739533156s
Feb 26 11:14:42.825: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.779321007s
STEP: Saw pod success
Feb 26 11:14:42.825: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr" satisfied condition "success or failure"
Feb 26 11:14:42.897: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr container root-ca-test: 
STEP: delete the pod
Feb 26 11:14:42.987: INFO: Waiting for pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr to disappear
Feb 26 11:14:42.997: INFO: Pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-j75fr no longer exists
STEP: Creating a pod to test consume service account namespace
Feb 26 11:14:43.883: INFO: Waiting up to 5m0s for pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9" in namespace "e2e-tests-svcaccounts-g8dxv" to be "success or failure"
Feb 26 11:14:44.037: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 153.020246ms
Feb 26 11:14:46.050: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166063791s
Feb 26 11:14:48.082: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198476191s
Feb 26 11:14:50.095: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21197126s
Feb 26 11:14:52.111: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227578718s
Feb 26 11:14:54.180: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.296956929s
Feb 26 11:14:56.196: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.312084795s
Feb 26 11:14:58.210: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.326521443s
Feb 26 11:15:00.235: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.351528744s
Feb 26 11:15:02.253: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.36956496s
STEP: Saw pod success
Feb 26 11:15:02.253: INFO: Pod "pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9" satisfied condition "success or failure"
Feb 26 11:15:02.271: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9 container namespace-test: 
STEP: delete the pod
Feb 26 11:15:02.615: INFO: Waiting for pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9 to disappear
Feb 26 11:15:02.637: INFO: Pod pod-service-account-1d1ebe64-5889-11ea-8134-0242ac110008-kjfd9 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:15:02.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-g8dxv" for this suite.
Feb 26 11:15:12.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:15:12.874: INFO: namespace: e2e-tests-svcaccounts-g8dxv, resource: bindings, ignored listing per whitelist
Feb 26 11:15:12.888: INFO: namespace e2e-tests-svcaccounts-g8dxv deletion completed in 10.240470338s

• [SLOW TEST:62.606 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
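The three pods above each read one piece of the auto-mounted service-account volume (token, root CA, namespace). The same files can be inspected from any pod running under the default service account; the pod name below is illustrative:

kubectl run sa-demo --image=busybox:1.29 --restart=Never --command -- \
  sh -c 'ls /var/run/secrets/kubernetes.io/serviceaccount; cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
# expect ca.crt, namespace and token listed, followed by the namespace name:
kubectl logs sa-demo
kubectl delete pod sa-demo
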
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:15:12.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-422ebe3b-5889-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 11:15:13.257: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-7tvbk" to be "success or failure"
Feb 26 11:15:13.273: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.110372ms
Feb 26 11:15:15.463: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206517476s
Feb 26 11:15:17.496: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238633369s
Feb 26 11:15:19.569: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311900847s
Feb 26 11:15:21.767: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.509737395s
Feb 26 11:15:23.803: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.545894819s
STEP: Saw pod success
Feb 26 11:15:23.803: INFO: Pod "pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:15:23.811: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 11:15:24.276: INFO: Waiting for pod pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008 to disappear
Feb 26 11:15:24.302: INFO: Pod pod-projected-configmaps-42300a8a-5889-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:15:24.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7tvbk" for this suite.
Feb 26 11:15:30.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:15:30.534: INFO: namespace: e2e-tests-projected-7tvbk, resource: bindings, ignored listing per whitelist
Feb 26 11:15:30.617: INFO: namespace e2e-tests-projected-7tvbk deletion completed in 6.300607209s

• [SLOW TEST:17.729 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:15:30.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 26 11:15:30.978: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9jmkl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9jmkl/configmaps/e2e-watch-test-label-changed,UID:4cb818f6-5889-11ea-a994-fa163e34d433,ResourceVersion:22970626,Generation:0,CreationTimestamp:2020-02-26 11:15:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 26 11:15:30.979: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9jmkl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9jmkl/configmaps/e2e-watch-test-label-changed,UID:4cb818f6-5889-11ea-a994-fa163e34d433,ResourceVersion:22970627,Generation:0,CreationTimestamp:2020-02-26 11:15:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 26 11:15:30.979: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9jmkl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9jmkl/configmaps/e2e-watch-test-label-changed,UID:4cb818f6-5889-11ea-a994-fa163e34d433,ResourceVersion:22970628,Generation:0,CreationTimestamp:2020-02-26 11:15:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 26 11:15:41.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9jmkl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9jmkl/configmaps/e2e-watch-test-label-changed,UID:4cb818f6-5889-11ea-a994-fa163e34d433,ResourceVersion:22970642,Generation:0,CreationTimestamp:2020-02-26 11:15:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 26 11:15:41.102: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9jmkl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9jmkl/configmaps/e2e-watch-test-label-changed,UID:4cb818f6-5889-11ea-a994-fa163e34d433,ResourceVersion:22970643,Generation:0,CreationTimestamp:2020-02-26 11:15:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 26 11:15:41.102: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-9jmkl,SelfLink:/api/v1/namespaces/e2e-tests-watch-9jmkl/configmaps/e2e-watch-test-label-changed,UID:4cb818f6-5889-11ea-a994-fa163e34d433,ResourceVersion:22970644,Generation:0,CreationTimestamp:2020-02-26 11:15:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:15:41.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9jmkl" for this suite.
Feb 26 11:15:47.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:15:47.473: INFO: namespace: e2e-tests-watch-9jmkl, resource: bindings, ignored listing per whitelist
Feb 26 11:15:47.510: INFO: namespace e2e-tests-watch-9jmkl deletion completed in 6.282756509s

• [SLOW TEST:16.892 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
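The watch set up here can be reproduced with a label-selector watch on ConfigMaps: when the label stops matching the selector the API delivers a DELETED event to that watcher, and an ADDED event when it is restored, which is the core of the sequence logged above. Sketch (the ConfigMap name is illustrative; the label value matches the events above):

kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
kubectl create configmap e2e-watch-demo
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored
kubectl label configmap e2e-watch-demo watch-this-configmap=mismatch --overwrite                     # selector no longer matches: DELETED at the API
kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored --overwrite   # matches again: ADDED
kubectl delete configmap e2e-watch-demo                                                              # DELETED
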
SSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:15:47.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-56b9e802-5889-11ea-8134-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-56b9e90a-5889-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-56b9e802-5889-11ea-8134-0242ac110008
STEP: Updating configmap cm-test-opt-upd-56b9e90a-5889-11ea-8134-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-56b9e95d-5889-11ea-8134-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:17:13.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-pkclk" for this suite.
Feb 26 11:17:37.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:17:37.303: INFO: namespace: e2e-tests-configmap-pkclk, resource: bindings, ignored listing per whitelist
Feb 26 11:17:37.303: INFO: namespace e2e-tests-configmap-pkclk deletion completed in 24.246866484s

• [SLOW TEST:109.793 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
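
The "optional updates" behavior above comes from ConfigMap volumes marked Optional; a hedged corev1 sketch of a comparable pod spec follows (names and image are invented, not the test's exact objects):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalConfigMapPod builds a pod that mounts two ConfigMap volumes with
// Optional=true: the kubelet starts and keeps the pod running even if a
// referenced ConfigMap is deleted, and later creations/updates are projected
// into the mounted volume without restarting the container.
func optionalConfigMapPod() *v1.Pod {
	optional := true
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []v1.VolumeMount{
					{Name: "delcm-volume", MountPath: "/etc/delcm-volume"},
					{Name: "updcm-volume", MountPath: "/etc/updcm-volume"},
				},
			}},
			Volumes: []v1.Volume{
				{Name: "delcm-volume", VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}}},
				{Name: "updcm-volume", VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}}},
			},
		},
	}
}

func main() {
	fmt.Println(optionalConfigMapPod().Name)
}
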
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:17:37.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 26 11:17:37.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hhp8m'
Feb 26 11:17:37.664: INFO: stderr: ""
Feb 26 11:17:37.665: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 26 11:17:47.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hhp8m -o json'
Feb 26 11:17:47.887: INFO: stderr: ""
Feb 26 11:17:47.887: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-26T11:17:37Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-hhp8m\",\n        \"resourceVersion\": \"22970851\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-hhp8m/pods/e2e-test-nginx-pod\",\n        \"uid\": \"983e1ad3-5889-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-ct9qp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-ct9qp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-ct9qp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-26T11:17:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-26T11:17:46Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-26T11:17:46Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-26T11:17:37Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://712873849aa6bf73981d7b9e914ffa310343ab4308b8979b8924a24388d46cb6\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-26T11:17:45Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-26T11:17:37Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 26 11:17:47.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-hhp8m'
Feb 26 11:17:48.277: INFO: stderr: ""
Feb 26 11:17:48.277: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb 26 11:17:48.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hhp8m'
Feb 26 11:17:56.611: INFO: stderr: ""
Feb 26 11:17:56.611: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:17:56.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hhp8m" for this suite.
Feb 26 11:18:02.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:18:03.002: INFO: namespace: e2e-tests-kubectl-hhp8m, resource: bindings, ignored listing per whitelist
Feb 26 11:18:03.105: INFO: namespace e2e-tests-kubectl-hhp8m deletion completed in 6.460512371s

• [SLOW TEST:25.801 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
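
The test above drives kubectl run and kubectl replace -f - from the CLI. For comparison, the same single-container image swap can be done programmatically; this is a hedged client-go sketch under the 1.13-era API (Get/Update without a context), not what the test actually runs:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods := clientset.CoreV1().Pods("default")

	// Fetch the running pod, swap the container image, and send the whole
	// object back. This mirrors what `kubectl replace -f -` does with the
	// edited manifest: the container image is one of the few pod spec fields
	// that may be changed in place.
	pod, err := pods.Get("e2e-test-nginx-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}
	fmt.Println("pod image replaced")
}
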
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:18:03.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 26 11:18:03.301: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:18:19.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vzkfd" for this suite.
Feb 26 11:18:28.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:18:28.219: INFO: namespace: e2e-tests-init-container-vzkfd, resource: bindings, ignored listing per whitelist
Feb 26 11:18:28.396: INFO: namespace e2e-tests-init-container-vzkfd deletion completed in 8.391899992s

• [SLOW TEST:25.291 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
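
For context, a hedged corev1 sketch of the shape of pod this test creates: init containers that must each run to completion, in order, before the main container starts, on a pod with RestartPolicyNever (names and images here are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// Init containers run sequentially; each must exit 0 before the
			// next one starts. With RestartPolicyNever, a failing init
			// container fails the whole pod instead of being retried.
			InitContainers: []v1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []v1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
}

func main() {
	fmt.Println(initContainerPod().Name)
}
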
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:18:28.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 26 11:18:28.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hc8w9'
Feb 26 11:18:29.136: INFO: stderr: ""
Feb 26 11:18:29.137: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 26 11:18:30.150: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:30.151: INFO: Found 0 / 1
Feb 26 11:18:31.150: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:31.150: INFO: Found 0 / 1
Feb 26 11:18:32.153: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:32.153: INFO: Found 0 / 1
Feb 26 11:18:33.151: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:33.151: INFO: Found 0 / 1
Feb 26 11:18:34.156: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:34.157: INFO: Found 0 / 1
Feb 26 11:18:35.161: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:35.161: INFO: Found 0 / 1
Feb 26 11:18:36.153: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:36.153: INFO: Found 0 / 1
Feb 26 11:18:37.144: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:37.144: INFO: Found 0 / 1
Feb 26 11:18:38.179: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:38.179: INFO: Found 0 / 1
Feb 26 11:18:39.159: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:39.159: INFO: Found 1 / 1
Feb 26 11:18:39.159: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 26 11:18:39.165: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:39.165: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 26 11:18:39.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-w8p67 --namespace=e2e-tests-kubectl-hc8w9 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 26 11:18:39.365: INFO: stderr: ""
Feb 26 11:18:39.365: INFO: stdout: "pod/redis-master-w8p67 patched\n"
STEP: checking annotations
Feb 26 11:18:39.379: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 11:18:39.380: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:18:39.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hc8w9" for this suite.
Feb 26 11:19:03.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:19:03.517: INFO: namespace: e2e-tests-kubectl-hc8w9, resource: bindings, ignored listing per whitelist
Feb 26 11:19:03.769: INFO: namespace e2e-tests-kubectl-hc8w9 deletion completed in 24.381553876s

• [SLOW TEST:35.373 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
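
The patch applied above is a strategic-merge patch that adds a single annotation. A minimal client-go equivalent of that kubectl patch call, sketched under the same 1.13-era API (Patch without a context; namespace and pod name are illustrative):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same payload kubectl sent: merge {"metadata":{"annotations":{"x":"y"}}}
	// into the live object; any existing annotations are left untouched.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := clientset.CoreV1().Pods("default").Patch(
		"redis-master-w8p67", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println("annotation x =", pod.Annotations["x"])
}
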
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:19:03.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-cbd3f6a6-5889-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 11:19:04.238: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-d5kz8" to be "success or failure"
Feb 26 11:19:04.257: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.889927ms
Feb 26 11:19:06.275: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036372914s
Feb 26 11:19:08.298: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059596142s
Feb 26 11:19:10.905: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.666306617s
Feb 26 11:19:12.922: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683595333s
Feb 26 11:19:14.953: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.714581549s
STEP: Saw pod success
Feb 26 11:19:14.954: INFO: Pod "pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:19:14.966: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 11:19:15.221: INFO: Waiting for pod pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008 to disappear
Feb 26 11:19:15.229: INFO: Pod pod-projected-configmaps-cbd61b0c-5889-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:19:15.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d5kz8" for this suite.
Feb 26 11:19:23.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:19:23.375: INFO: namespace: e2e-tests-projected-d5kz8, resource: bindings, ignored listing per whitelist
Feb 26 11:19:23.549: INFO: namespace e2e-tests-projected-d5kz8 deletion completed in 8.314024844s

• [SLOW TEST:19.779 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
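
"With mappings" means the projected volume remaps ConfigMap keys to file paths via items rather than using the default one-file-per-key layout. A hedged corev1 sketch of such a volume (key and path names chosen for illustration):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume maps the ConfigMap key "data-1" to the file
// path/to/data-2 inside the mount point.
func projectedConfigMapVolume() v1.Volume {
	return v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map",
						},
						Items: []v1.KeyToPath{
							{Key: "data-1", Path: "path/to/data-2"},
						},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println(projectedConfigMapVolume().Name)
}
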
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:19:23.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:19:24.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cgf6f" for this suite.
Feb 26 11:19:48.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:19:48.309: INFO: namespace: e2e-tests-pods-cgf6f, resource: bindings, ignored listing per whitelist
Feb 26 11:19:48.369: INFO: namespace e2e-tests-pods-cgf6f deletion completed in 24.319786832s

• [SLOW TEST:24.820 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
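
The QOS class verified above is computed by the API server from the pod's resource requests and limits and surfaced in pod status. A small client-go sketch of reading it back (pod name is illustrative; same 1.13-era API):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("default").Get("pod-qos-class-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Guaranteed: every container has equal requests and limits for cpu and memory.
	// Burstable: at least one request or limit is set, but not Guaranteed.
	// BestEffort: no requests or limits at all (as with the nginx pod dumped
	// earlier in this log, whose qosClass is BestEffort).
	fmt.Println("QOS class:", pod.Status.QOSClass)
}
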
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:19:48.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:19:58.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-l9rtt" for this suite.
Feb 26 11:20:44.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:20:44.982: INFO: namespace: e2e-tests-kubelet-test-l9rtt, resource: bindings, ignored listing per whitelist
Feb 26 11:20:45.020: INFO: namespace e2e-tests-kubelet-test-l9rtt deletion completed in 46.197829568s

• [SLOW TEST:56.650 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
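
The test above runs a busybox command in a pod and asserts its output appears in the container log. A hedged sketch of the log-fetching side with 1.13-era client-go (rest.Request.Stream without a context argument; pod name is illustrative):

package main

import (
	"fmt"
	"io/ioutil"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GetLogs returns a rest.Request; Stream opens the kubelet-backed log
	// stream for the pod's (single) container.
	req := clientset.CoreV1().Pods("default").GetLogs("busybox-scheduling-example", &v1.PodLogOptions{})
	rc, err := req.Stream()
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	out, err := ioutil.ReadAll(rc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // whatever the busybox command echoed inside the container
}
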
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:20:45.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 26 11:23:49.985: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:23:50.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:23:52.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:23:52.658: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:23:54.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:23:54.159: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:23:56.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:23:56.164: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:23:58.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:23:58.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:00.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:00.164: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:02.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:02.179: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:04.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:04.170: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:06.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:06.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:08.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:08.177: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:10.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:10.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:12.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:12.177: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:14.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:14.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:16.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:16.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:18.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:18.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:20.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:20.160: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:22.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:22.860: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:24.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:24.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:26.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:26.160: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:28.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:28.303: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:30.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:30.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:32.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:32.204: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:34.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:34.167: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:36.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:36.167: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:38.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:38.169: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:40.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:40.223: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:42.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:42.196: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:44.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:44.198: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:46.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:46.202: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:48.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:48.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:50.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:50.170: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:52.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:52.171: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:54.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:54.155: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:56.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:56.166: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:24:58.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:24:58.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:00.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:00.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:02.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:02.167: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:04.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:04.157: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:06.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:06.166: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:08.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:08.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:10.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:10.160: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:12.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:12.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:14.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:14.162: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:16.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:16.180: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:18.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:18.153: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:20.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:20.162: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:22.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:22.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:24.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:24.162: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:26.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:26.181: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:28.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:28.165: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:30.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:30.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:32.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:32.202: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:34.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:34.156: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:36.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:36.170: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:38.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:38.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:40.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:40.159: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:42.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:42.173: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:44.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:44.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:46.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:46.173: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:48.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:48.164: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:50.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:50.166: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:52.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:52.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:54.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:54.159: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:56.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:56.170: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:25:58.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:25:58.166: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:00.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:00.157: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:02.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:02.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:04.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:04.165: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:06.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:06.174: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:08.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:08.163: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:10.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:10.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:12.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:12.164: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:14.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:14.159: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:16.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:16.209: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:18.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:18.165: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:20.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:20.156: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:22.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:22.157: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:24.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:24.228: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:26.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:26.175: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:28.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:28.171: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:30.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:30.162: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:32.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:32.165: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:34.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:34.190: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:36.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:36.202: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:38.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:38.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:40.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:40.191: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:42.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:42.198: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:44.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:44.159: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:46.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:46.216: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:48.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:48.158: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:50.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:50.161: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 26 11:26:50.162: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 26 11:26:50.272: INFO: Pod pod-with-poststart-exec-hook still exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-container-lifecycle-hook-5wdg6".
STEP: Found 12 events.
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:20:45 +0000 UTC - event for pod-handle-http-request: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-lifecycle-hook-5wdg6/pod-handle-http-request to hunter-server-hu5at5svl7ps
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:20:50 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/netexec:1.1" already present on machine
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:20:52 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Created: Created container
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:20:53 +0000 UTC - event for pod-handle-http-request: {kubelet hunter-server-hu5at5svl7ps} Started: Started container
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:20:55 +0000 UTC - event for pod-with-poststart-exec-hook: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-lifecycle-hook-5wdg6/pod-with-poststart-exec-hook to hunter-server-hu5at5svl7ps
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:20:59 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/hostexec:1.1" already present on machine
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:21:02 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Created: Created container
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:21:02 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Started: Started container
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:23:13 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} FailedPostStartHook: Exec lifecycle hook ([sh -c curl http://10.32.0.4:8080/echo?msg=poststart]) for Container "pod-with-poststart-exec-hook" in Pod "pod-with-poststart-exec-hook_e2e-tests-container-lifecycle-hook-5wdg6(0e0c621a-588a-11ea-a994-fa163e34d433)" failed - error: command 'sh -c curl http://10.32.0.4:8080/echo?msg=poststart' exited with 7:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  [curl progress meter repeated once per second for ~2m10s, 0 bytes transferred]
  0     0    0     0    0     0      0      0 --:--:--  0:02:09 --:--:--     0curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out
, message: "<same curl progress output as above, ending with: curl: (7) Failed to connect to 10.32.0.4 port 8080: Operation timed out>"
Feb 26 11:26:50.413: INFO: At 2020-02-26 11:23:45 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Killing: Killing container with id docker://pod-with-poststart-exec-hook:FailedPostStartHook
Feb 26 11:26:50.414: INFO: At 2020-02-26 11:23:50 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Feb 26 11:26:50.414: INFO: At 2020-02-26 11:24:23 +0000 UTC - event for pod-with-poststart-exec-hook: {kubelet hunter-server-hu5at5svl7ps} Killing: Killing container with id docker://pod-with-poststart-exec-hook:Need to kill Pod
Feb 26 11:26:50.445: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:26:50.445: INFO: pod-handle-http-request                             hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:20:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:20:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:20:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:20:45 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: pod-with-poststart-exec-hook                        hunter-server-hu5at5svl7ps  Running  15s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:20:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:23:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:23:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:20:55 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 17:49:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-16 17:49:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:17:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:17:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-25 12:21:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-25 12:21:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Feb 26 11:26:50.445: INFO: 
Feb 26 11:26:50.455: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Feb 26 11:26:50.461: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:22971675,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-26 11:26:42 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-26 11:26:42 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-26 11:26:42 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-26 11:26:42 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717] 126698067} {[nginx@sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f nginx:latest] 126698063} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} 
{[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Feb 26 11:26:50.462: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Feb 26 11:26:50.469: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Feb 26 11:26:50.494: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Feb 26 11:26:50.494: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Feb 26 11:26:50.494: INFO: 	Container coredns ready: true, restart count 0
Feb 26 11:26:50.494: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Feb 26 11:26:50.494: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 11:26:50.494: INFO: pod-with-poststart-exec-hook started at 2020-02-26 11:20:55 +0000 UTC (0+1 container statuses recorded)
Feb 26 11:26:50.494: INFO: 	Container pod-with-poststart-exec-hook ready: true, restart count 1
Feb 26 11:26:50.494: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Feb 26 11:26:50.494: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Feb 26 11:26:50.494: INFO: 	Container weave ready: true, restart count 0
Feb 26 11:26:50.494: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 11:26:50.494: INFO: pod-handle-http-request started at 2020-02-26 11:20:45 +0000 UTC (0+1 container statuses recorded)
Feb 26 11:26:50.494: INFO: 	Container pod-handle-http-request ready: true, restart count 0
Feb 26 11:26:50.494: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Feb 26 11:26:50.494: INFO: 	Container coredns ready: true, restart count 0
Feb 26 11:26:50.494: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Feb 26 11:26:50.494: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0226 11:26:50.500889       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 11:26:50.579: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Feb 26 11:26:50.579: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:32.051554s}
Feb 26 11:26:50.579: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.025323s}
Feb 26 11:26:50.579: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.5 Latency:12.009833s}
Feb 26 11:26:50.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5wdg6" for this suite.
Feb 26 11:27:32.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:27:32.965: INFO: namespace: e2e-tests-container-lifecycle-hook-5wdg6, resource: bindings, ignored listing per whitelist
Feb 26 11:27:33.097: INFO: namespace e2e-tests-container-lifecycle-hook-5wdg6 deletion completed in 42.497144511s

• Failure [408.077 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    wait for pod "pod-with-poststart-exec-hook" to disappear
    Expected success, but got an error:
        <*errors.errorString | 0xc0000a18b0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
------------------------------
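Note: the failure above comes from a pod whose container defines a postStart exec lifecycle hook; the hook ran curl against the pod-handle-http-request pod at 10.32.0.4:8080 and timed out, so the kubelet reported FailedPostStartHook and killed the container. A minimal sketch of such a pod follows as an illustration only; the image, the wget probe and the sleep command are assumptions, not the manifest the e2e framework actually generates:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # The hook must succeed before the container is considered started.
          # If it fails (here: the HTTP target is unreachable, as in the log
          # above), the kubelet emits FailedPostStartHook and kills the container.
          command: ["sh", "-c", "wget -q -O- http://10.32.0.4:8080/"]
EOF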
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:27:33.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 26 11:27:33.442: INFO: Waiting up to 5m0s for pod "pod-fb584450-588a-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-nbctt" to be "success or failure"
Feb 26 11:27:33.456: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.287401ms
Feb 26 11:27:35.480: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037553867s
Feb 26 11:27:37.495: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05260868s
Feb 26 11:27:39.961: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518351487s
Feb 26 11:27:41.985: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.542755626s
Feb 26 11:27:44.049: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.606298495s
STEP: Saw pod success
Feb 26 11:27:44.049: INFO: Pod "pod-fb584450-588a-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:27:44.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fb584450-588a-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 11:27:44.385: INFO: Waiting for pod pod-fb584450-588a-11ea-8134-0242ac110008 to disappear
Feb 26 11:27:44.395: INFO: Pod pod-fb584450-588a-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:27:44.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nbctt" for this suite.
Feb 26 11:27:50.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:27:50.606: INFO: namespace: e2e-tests-emptydir-nbctt, resource: bindings, ignored listing per whitelist
Feb 26 11:27:50.693: INFO: namespace e2e-tests-emptydir-nbctt deletion completed in 6.29198948s

• [SLOW TEST:17.594 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
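Note: the EmptyDir test above mounts a memory-backed (tmpfs) emptyDir volume and checks that a file created with mode 0644 by a non-root user looks as expected. A rough sketch under those assumptions follows (busybox, the UID and the paths are illustrative, not the framework's mounttest fixture); the (non-root,0644,default) and (root,0777,default) variants later in this log differ only in the user/mode exercised and in omitting medium: Memory:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root, as in the (non-root,...) variants
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0644 /mnt/test/f && ls -ln /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs; drop this line for the default (node disk) medium
EOF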
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:27:50.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8xtvh
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-8xtvh
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-8xtvh
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-8xtvh
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-8xtvh
Feb 26 11:28:02.991: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8xtvh, name: ss-0, uid: 0ce33d26-588b-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 26 11:28:02.993: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8xtvh, name: ss-0, uid: 0ce33d26-588b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 26 11:28:03.032: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8xtvh, name: ss-0, uid: 0ce33d26-588b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 26 11:28:03.053: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-8xtvh
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-8xtvh
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-8xtvh and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 26 11:28:15.812: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8xtvh
Feb 26 11:28:15.823: INFO: Scaling statefulset ss to 0
Feb 26 11:28:25.931: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:28:25.939: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:28:25.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8xtvh" for this suite.
Feb 26 11:28:32.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:28:32.275: INFO: namespace: e2e-tests-statefulset-8xtvh, resource: bindings, ignored listing per whitelist
Feb 26 11:28:32.589: INFO: namespace e2e-tests-statefulset-8xtvh deletion completed in 6.544710082s

• [SLOW TEST:41.896 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
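Note: the StatefulSet test above first creates a plain pod that claims a hostPort on the node, then a StatefulSet whose pod template requests the same hostPort, so ss-0 goes to Failed and the controller keeps recreating it until the conflicting pod is deleted. A rough sketch of such a conflicting pair follows; the port number, images and labels are assumptions rather than the framework's generated objects:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: hunter-server-hu5at5svl7ps     # pin to the same node as the stateful pod
  containers:
  - name: web
    image: nginx:1.15-alpine
    ports:
    - containerPort: 80
      hostPort: 21017                      # illustrative port; the conflict is what matters
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-conflict-demo
  template:
    metadata:
      labels:
        app: ss-conflict-demo
    spec:
      nodeName: hunter-server-hu5at5svl7ps
      containers:
      - name: web
        image: nginx:1.15-alpine
        ports:
        - containerPort: 80
          hostPort: 21017                  # same hostPort: ss-0 stays Failed until test-pod is removed
EOF

Deleting test-pod (which is what the "Removing pod with conflicting port" step above does) frees the port and lets ss-0 come up.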
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:28:32.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 26 11:28:32.808: INFO: Waiting up to 5m0s for pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-8wjbp" to be "success or failure"
Feb 26 11:28:32.891: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 82.007839ms
Feb 26 11:28:34.917: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108374777s
Feb 26 11:28:36.934: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124987362s
Feb 26 11:28:39.297: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.488558197s
Feb 26 11:28:41.332: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523863774s
Feb 26 11:28:43.360: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.551066632s
STEP: Saw pod success
Feb 26 11:28:43.360: INFO: Pod "pod-1eb3cba1-588b-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:28:43.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1eb3cba1-588b-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 11:28:43.702: INFO: Waiting for pod pod-1eb3cba1-588b-11ea-8134-0242ac110008 to disappear
Feb 26 11:28:43.712: INFO: Pod pod-1eb3cba1-588b-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:28:43.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8wjbp" for this suite.
Feb 26 11:28:49.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:28:50.024: INFO: namespace: e2e-tests-emptydir-8wjbp, resource: bindings, ignored listing per whitelist
Feb 26 11:28:50.132: INFO: namespace e2e-tests-emptydir-8wjbp deletion completed in 6.412747842s

• [SLOW TEST:17.541 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:28:50.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 26 11:28:50.310: INFO: Waiting up to 5m0s for pod "pod-2931114c-588b-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-b9tqb" to be "success or failure"
Feb 26 11:28:50.341: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.395509ms
Feb 26 11:28:52.409: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098310255s
Feb 26 11:28:54.430: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120201512s
Feb 26 11:28:57.485: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.17455387s
Feb 26 11:28:59.510: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.199624864s
Feb 26 11:29:01.536: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.226123588s
STEP: Saw pod success
Feb 26 11:29:01.537: INFO: Pod "pod-2931114c-588b-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:29:01.551: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2931114c-588b-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 11:29:01.952: INFO: Waiting for pod pod-2931114c-588b-11ea-8134-0242ac110008 to disappear
Feb 26 11:29:01.964: INFO: Pod pod-2931114c-588b-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:29:01.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b9tqb" for this suite.
Feb 26 11:29:08.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:29:08.384: INFO: namespace: e2e-tests-emptydir-b9tqb, resource: bindings, ignored listing per whitelist
Feb 26 11:29:08.635: INFO: namespace e2e-tests-emptydir-b9tqb deletion completed in 6.652183521s

• [SLOW TEST:18.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:29:08.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 26 11:29:08.923: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:29:28.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-98q2b" for this suite.
Feb 26 11:29:36.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:29:36.899: INFO: namespace: e2e-tests-init-container-98q2b, resource: bindings, ignored listing per whitelist
Feb 26 11:29:37.073: INFO: namespace e2e-tests-init-container-98q2b deletion completed in 8.336222431s

• [SLOW TEST:28.438 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
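Note: the InitContainer test above builds a pod with restartPolicy Never whose init container exits non-zero; the pod must end up Failed and the app container must never start. A minimal sketch with assumed names and images:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never              # a failed init container fails the whole pod
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo this should never run"]
EOF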
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:29:37.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4529dc60-588b-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 11:29:37.386: INFO: Waiting up to 5m0s for pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-46qxk" to be "success or failure"
Feb 26 11:29:37.521: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 135.320476ms
Feb 26 11:29:39.840: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45367001s
Feb 26 11:29:41.857: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471467035s
Feb 26 11:29:43.921: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.534867103s
Feb 26 11:29:46.634: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.248458002s
Feb 26 11:29:48.650: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.264187847s
STEP: Saw pod success
Feb 26 11:29:48.650: INFO: Pod "pod-secrets-453da89a-588b-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:29:48.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-453da89a-588b-11ea-8134-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 26 11:29:48.995: INFO: Waiting for pod pod-secrets-453da89a-588b-11ea-8134-0242ac110008 to disappear
Feb 26 11:29:49.008: INFO: Pod pod-secrets-453da89a-588b-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:29:49.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-46qxk" for this suite.
Feb 26 11:29:55.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:29:55.115: INFO: namespace: e2e-tests-secrets-46qxk, resource: bindings, ignored listing per whitelist
Feb 26 11:29:55.345: INFO: namespace e2e-tests-secrets-46qxk deletion completed in 6.322519675s
STEP: Destroying namespace "e2e-tests-secret-namespace-hrp5n" for this suite.
Feb 26 11:30:01.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:30:01.465: INFO: namespace: e2e-tests-secret-namespace-hrp5n, resource: bindings, ignored listing per whitelist
Feb 26 11:30:01.589: INFO: namespace e2e-tests-secret-namespace-hrp5n deletion completed in 6.243201611s

• [SLOW TEST:24.515 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
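Note: the Secrets test above mounts a secret as a volume and verifies that an identically named secret in a different namespace has no effect on the mount. A small sketch under those assumptions (secret names, keys and values are illustrative):

kubectl create namespace other-ns
kubectl create secret generic shared-name --from-literal=data-1=value-in-pod-namespace
kubectl create secret generic shared-name --from-literal=data-1=value-in-other-namespace -n other-ns
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # prints value-in-pod-namespace
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name      # only the secret in the pod's own namespace is resolved
EOF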
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:30:01.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-53e22ea2-588b-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 11:30:01.943: INFO: Waiting up to 5m0s for pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-ph6j5" to be "success or failure"
Feb 26 11:30:01.962: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.214199ms
Feb 26 11:30:04.304: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360767255s
Feb 26 11:30:06.345: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401546497s
Feb 26 11:30:08.767: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823722449s
Feb 26 11:30:10.817: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.87362641s
Feb 26 11:30:12.849: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.905133053s
STEP: Saw pod success
Feb 26 11:30:12.849: INFO: Pod "pod-secrets-53e34651-588b-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:30:12.888: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-53e34651-588b-11ea-8134-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 26 11:30:12.980: INFO: Waiting for pod pod-secrets-53e34651-588b-11ea-8134-0242ac110008 to disappear
Feb 26 11:30:13.040: INFO: Pod pod-secrets-53e34651-588b-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:30:13.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ph6j5" for this suite.
Feb 26 11:30:19.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:30:19.169: INFO: namespace: e2e-tests-secrets-ph6j5, resource: bindings, ignored listing per whitelist
Feb 26 11:30:19.294: INFO: namespace e2e-tests-secrets-ph6j5 deletion completed in 6.239929108s

• [SLOW TEST:17.704 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
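Note: the "volume with mappings" variant above remaps individual secret keys onto chosen file paths inside the mount via the volume's items list. A small sketch (names are illustrative):

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1      # key data-1 is exposed at /etc/secret-volume/new-path-data-1
EOF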
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:30:19.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 11:30:19.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-ztg4x" to be "success or failure"
Feb 26 11:30:19.699: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 98.963714ms
Feb 26 11:30:22.113: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.513258374s
Feb 26 11:30:24.150: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.55037701s
Feb 26 11:30:27.059: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.458685985s
Feb 26 11:30:29.134: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.534504673s
Feb 26 11:30:31.146: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.546406758s
STEP: Saw pod success
Feb 26 11:30:31.146: INFO: Pod "downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:30:31.154: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 11:30:31.644: INFO: Waiting for pod downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008 to disappear
Feb 26 11:30:32.159: INFO: Pod downwardapi-volume-5e67478a-588b-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:30:32.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ztg4x" for this suite.
Feb 26 11:30:38.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:30:38.501: INFO: namespace: e2e-tests-downward-api-ztg4x, resource: bindings, ignored listing per whitelist
Feb 26 11:30:38.618: INFO: namespace e2e-tests-downward-api-ztg4x deletion completed in 6.417243214s

• [SLOW TEST:19.324 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
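Note: the Downward API test above exposes the container's memory limit through a downwardAPI volume; because no memory limit is set on the container, the reported value falls back to the node's allocatable memory. A minimal sketch (names are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # No resources.limits.memory here, so memory_limit reports node allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF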
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:30:38.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 26 11:30:38.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:40.975: INFO: stderr: ""
Feb 26 11:30:40.975: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
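Note: the manifest piped to create -f - above is not echoed in the log. A replication controller of the shape this Update Demo test exercises might look roughly like the sketch below; the name, label, image and replica count come from the log itself, while the container port and everything else are assumptions rather than the framework's exact fixture:

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j94vp <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo          # matches the -l name=update-demo queries below
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF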
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 11:30:40.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:41.202: INFO: stderr: ""
Feb 26 11:30:41.202: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
Feb 26 11:30:41.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g467x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:41.357: INFO: stderr: ""
Feb 26 11:30:41.357: INFO: stdout: ""
Feb 26 11:30:41.357: INFO: update-demo-nautilus-g467x is created but not running
Feb 26 11:30:46.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:46.511: INFO: stderr: ""
Feb 26 11:30:46.511: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
Feb 26 11:30:46.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g467x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:46.643: INFO: stderr: ""
Feb 26 11:30:46.644: INFO: stdout: ""
Feb 26 11:30:46.644: INFO: update-demo-nautilus-g467x is created but not running
Feb 26 11:30:51.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:51.855: INFO: stderr: ""
Feb 26 11:30:51.855: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
Feb 26 11:30:51.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g467x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:51.990: INFO: stderr: ""
Feb 26 11:30:51.990: INFO: stdout: ""
Feb 26 11:30:51.990: INFO: update-demo-nautilus-g467x is created but not running
Feb 26 11:30:56.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:57.183: INFO: stderr: ""
Feb 26 11:30:57.183: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
Feb 26 11:30:57.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g467x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:57.280: INFO: stderr: ""
Feb 26 11:30:57.280: INFO: stdout: "true"
Feb 26 11:30:57.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g467x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:57.401: INFO: stderr: ""
Feb 26 11:30:57.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 11:30:57.401: INFO: validating pod update-demo-nautilus-g467x
Feb 26 11:30:57.479: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 11:30:57.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 26 11:30:57.480: INFO: update-demo-nautilus-g467x is verified up and running
Feb 26 11:30:57.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzm7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:57.620: INFO: stderr: ""
Feb 26 11:30:57.621: INFO: stdout: "true"
Feb 26 11:30:57.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzm7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:30:57.711: INFO: stderr: ""
Feb 26 11:30:57.711: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 11:30:57.711: INFO: validating pod update-demo-nautilus-vzm7b
Feb 26 11:30:57.722: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 11:30:57.722: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 26 11:30:57.722: INFO: update-demo-nautilus-vzm7b is verified up and running
STEP: scaling down the replication controller
Feb 26 11:30:57.725: INFO: scanned /root for discovery docs: 
Feb 26 11:30:57.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:01.184: INFO: stderr: ""
Feb 26 11:31:01.184: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 11:31:01.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:01.319: INFO: stderr: ""
Feb 26 11:31:01.319: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 26 11:31:06.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:06.557: INFO: stderr: ""
Feb 26 11:31:06.557: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 26 11:31:11.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:11.763: INFO: stderr: ""
Feb 26 11:31:11.763: INFO: stdout: "update-demo-nautilus-g467x update-demo-nautilus-vzm7b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 26 11:31:16.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:16.915: INFO: stderr: ""
Feb 26 11:31:16.915: INFO: stdout: "update-demo-nautilus-vzm7b "
Feb 26 11:31:16.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzm7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:17.046: INFO: stderr: ""
Feb 26 11:31:17.046: INFO: stdout: "true"
Feb 26 11:31:17.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzm7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:17.168: INFO: stderr: ""
Feb 26 11:31:17.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 11:31:17.169: INFO: validating pod update-demo-nautilus-vzm7b
Feb 26 11:31:17.181: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 11:31:17.181: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 26 11:31:17.181: INFO: update-demo-nautilus-vzm7b is verified up and running
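The scale-down itself is a single "kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m"; the repeated "Replicas for name=update-demo: expected=1 actual=2" lines show the test then re-listing pods by label roughly every five seconds until the observed count matches. A rough Go sketch of that scale-and-poll pattern, again shelling out to kubectl (only the five-second interval is visible in the log; the five-minute polling deadline here is an assumption):

```go
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// scaleRCAndWait scales a replication controller, then polls the pod list for
// the label selector until the expected number of pod names comes back.
// Sketch only: the 5s poll interval matches the log; the deadline is a guess.
func scaleRCAndWait(kubeconfig, namespace, rc, selector string, replicas int) error {
	if out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"scale", "rc", rc,
		fmt.Sprintf("--replicas=%d", replicas),
		"--timeout=5m", "--namespace="+namespace).CombinedOutput(); err != nil {
		return fmt.Errorf("scale failed: %v: %s", err, out)
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
			"get", "pods",
			"-o", "template", "--template={{range.items}}{{.metadata.name}} {{end}}",
			"-l", selector, "--namespace="+namespace).Output()
		if err != nil {
			return err
		}
		if len(strings.Fields(string(out))) == replicas {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %d pods matching %s", replicas, selector)
}
```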
STEP: scaling up the replication controller
Feb 26 11:31:17.183: INFO: scanned /root for discovery docs: 
Feb 26 11:31:17.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:18.685: INFO: stderr: ""
Feb 26 11:31:18.685: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 11:31:18.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:18.803: INFO: stderr: ""
Feb 26 11:31:18.804: INFO: stdout: "update-demo-nautilus-s25x6 update-demo-nautilus-vzm7b "
Feb 26 11:31:18.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s25x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:18.901: INFO: stderr: ""
Feb 26 11:31:18.901: INFO: stdout: ""
Feb 26 11:31:18.901: INFO: update-demo-nautilus-s25x6 is created but not running
Feb 26 11:31:23.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:24.280: INFO: stderr: ""
Feb 26 11:31:24.280: INFO: stdout: "update-demo-nautilus-s25x6 update-demo-nautilus-vzm7b "
Feb 26 11:31:24.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s25x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:24.420: INFO: stderr: ""
Feb 26 11:31:24.420: INFO: stdout: ""
Feb 26 11:31:24.420: INFO: update-demo-nautilus-s25x6 is created but not running
Feb 26 11:31:29.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:29.582: INFO: stderr: ""
Feb 26 11:31:29.582: INFO: stdout: "update-demo-nautilus-s25x6 update-demo-nautilus-vzm7b "
Feb 26 11:31:29.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s25x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:29.701: INFO: stderr: ""
Feb 26 11:31:29.701: INFO: stdout: "true"
Feb 26 11:31:29.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s25x6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:29.816: INFO: stderr: ""
Feb 26 11:31:29.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 11:31:29.816: INFO: validating pod update-demo-nautilus-s25x6
Feb 26 11:31:29.837: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 11:31:29.837: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 26 11:31:29.837: INFO: update-demo-nautilus-s25x6 is verified up and running
Feb 26 11:31:29.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzm7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:29.984: INFO: stderr: ""
Feb 26 11:31:29.984: INFO: stdout: "true"
Feb 26 11:31:29.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzm7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:30.104: INFO: stderr: ""
Feb 26 11:31:30.104: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 11:31:30.104: INFO: validating pod update-demo-nautilus-vzm7b
Feb 26 11:31:30.114: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 11:31:30.115: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 26 11:31:30.115: INFO: update-demo-nautilus-vzm7b is verified up and running
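The "got data" and "Unmarshalled json" lines are the content check: the test fetches a small JSON document from each pod (the nautilus image serves one with an image field) and compares it against the expected value. How the document is fetched is not visible in the log, so only the comparison step is sketched here; podData and checkImage are illustrative names:

```go
package sketch

import (
	"encoding/json"
	"fmt"
)

// podData matches the document shown in the log: {"image": "nautilus.jpg"}.
type podData struct {
	Image string `json:"image"`
}

// checkImage unmarshals the JSON body returned by a pod and verifies the image
// field, which is what the "expecting nautilus.jpg" lines report.
func checkImage(body []byte, want string) error {
	var d podData
	if err := json.Unmarshal(body, &d); err != nil {
		return err
	}
	if d.Image != want {
		return fmt.Errorf("got image %q, expected %q", d.Image, want)
	}
	return nil
}
```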
STEP: using delete to clean up resources
Feb 26 11:31:30.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:30.231: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 11:31:30.231: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 26 11:31:30.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-j94vp'
Feb 26 11:31:30.431: INFO: stderr: "No resources found.\n"
Feb 26 11:31:30.431: INFO: stdout: ""
Feb 26 11:31:30.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-j94vp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 11:31:30.549: INFO: stderr: ""
Feb 26 11:31:30.549: INFO: stdout: ""
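Cleanup is three kubectl calls: a force delete of the manifest that created the controller (--grace-period=0 --force, hence the warning about immediate deletion), a check that no rc or svc carrying the label remains, and a go-template listing of any pods that still lack a deletionTimestamp. A condensed Go sketch of the same sequence; the log pipes the manifest over stdin ("-f -"), whereas the sketch assumes a local file named update-demo.yaml:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Small helper that runs kubectl with the kubeconfig and namespace from the log.
	kubectl := func(args ...string) (string, error) {
		out, err := exec.Command("kubectl",
			append([]string{"--kubeconfig=/root/.kube/config",
				"--namespace=e2e-tests-kubectl-j94vp"}, args...)...).CombinedOutput()
		return string(out), err
	}

	// Force deletion returns immediately; the resource may linger briefly.
	fmt.Println(kubectl("delete", "--grace-period=0", "--force", "-f", "update-demo.yaml"))

	// Confirm the controller and any service with the label are gone.
	fmt.Println(kubectl("get", "rc,svc", "-l", "name=update-demo", "--no-headers"))

	// List pods that have not yet been marked for deletion.
	fmt.Println(kubectl("get", "pods", "-l", "name=update-demo", "-o",
		`go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`))
}
```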
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:31:30.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j94vp" for this suite.
Feb 26 11:31:54.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:31:54.789: INFO: namespace: e2e-tests-kubectl-j94vp, resource: bindings, ignored listing per whitelist
Feb 26 11:31:54.968: INFO: namespace e2e-tests-kubectl-j94vp deletion completed in 24.384815165s

• [SLOW TEST:76.348 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:31:54.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-b4lwp
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-b4lwp
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-b4lwp
Feb 26 11:31:55.314: INFO: Found 0 stateful pods, waiting for 1
Feb 26 11:32:05.334: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 26 11:32:05.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:32:06.076: INFO: stderr: "I0226 11:32:05.596408    1363 log.go:172] (0xc000138840) (0xc0007b0640) Create stream\nI0226 11:32:05.596712    1363 log.go:172] (0xc000138840) (0xc0007b0640) Stream added, broadcasting: 1\nI0226 11:32:05.602471    1363 log.go:172] (0xc000138840) Reply frame received for 1\nI0226 11:32:05.602511    1363 log.go:172] (0xc000138840) (0xc00067cc80) Create stream\nI0226 11:32:05.602521    1363 log.go:172] (0xc000138840) (0xc00067cc80) Stream added, broadcasting: 3\nI0226 11:32:05.604507    1363 log.go:172] (0xc000138840) Reply frame received for 3\nI0226 11:32:05.604533    1363 log.go:172] (0xc000138840) (0xc0007b06e0) Create stream\nI0226 11:32:05.604544    1363 log.go:172] (0xc000138840) (0xc0007b06e0) Stream added, broadcasting: 5\nI0226 11:32:05.605621    1363 log.go:172] (0xc000138840) Reply frame received for 5\nI0226 11:32:05.933255    1363 log.go:172] (0xc000138840) Data frame received for 3\nI0226 11:32:05.933306    1363 log.go:172] (0xc00067cc80) (3) Data frame handling\nI0226 11:32:05.933335    1363 log.go:172] (0xc00067cc80) (3) Data frame sent\nI0226 11:32:06.064207    1363 log.go:172] (0xc000138840) (0xc00067cc80) Stream removed, broadcasting: 3\nI0226 11:32:06.064875    1363 log.go:172] (0xc000138840) Data frame received for 1\nI0226 11:32:06.065010    1363 log.go:172] (0xc000138840) (0xc0007b06e0) Stream removed, broadcasting: 5\nI0226 11:32:06.065143    1363 log.go:172] (0xc0007b0640) (1) Data frame handling\nI0226 11:32:06.065211    1363 log.go:172] (0xc0007b0640) (1) Data frame sent\nI0226 11:32:06.065259    1363 log.go:172] (0xc000138840) (0xc0007b0640) Stream removed, broadcasting: 1\nI0226 11:32:06.065306    1363 log.go:172] (0xc000138840) Go away received\nI0226 11:32:06.065612    1363 log.go:172] (0xc000138840) (0xc0007b0640) Stream removed, broadcasting: 1\nI0226 11:32:06.065690    1363 log.go:172] (0xc000138840) (0xc00067cc80) Stream removed, broadcasting: 3\nI0226 11:32:06.065712    1363 log.go:172] (0xc000138840) (0xc0007b06e0) Stream removed, broadcasting: 5\n"
Feb 26 11:32:06.076: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:32:06.076: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:32:06.092: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 26 11:32:16.104: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
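The "scale up will halt" scenario depends on deliberately making ss-0 unready: the kubectl exec above moves index.html out of nginx's web root, so the pod's readiness probe (presumably an HTTP GET served from that directory; the probe spec is not shown in the log) starts failing and the pod flips to Running - Ready=false. A sketch of the two helpers this implies, mirroring the exec commands in the log:

```go
package sketch

import "os/exec"

// execInPod runs a shell command inside a pod, the way the log's
// "kubectl exec ... -- /bin/sh -c ..." lines do.
func execInPod(kubeconfig, namespace, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"exec", "--namespace="+namespace, pod,
		"--", "/bin/sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

// breakReadiness hides index.html so the (assumed) HTTP readiness probe on the
// nginx container starts failing and the pod goes Running - Ready=false.
func breakReadiness(kubeconfig, ns, pod string) error {
	_, err := execInPod(kubeconfig, ns, pod, "mv -v /usr/share/nginx/html/index.html /tmp/ || true")
	return err
}

// restoreReadiness moves the file back so the probe succeeds again.
func restoreReadiness(kubeconfig, ns, pod string) error {
	_, err := execInPod(kubeconfig, ns, pod, "mv -v /tmp/index.html /usr/share/nginx/html/ || true")
	return err
}
```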
Feb 26 11:32:16.104: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:32:16.139: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998439s
Feb 26 11:32:17.154: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987412706s
Feb 26 11:32:18.173: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972503976s
Feb 26 11:32:19.202: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.953573128s
Feb 26 11:32:20.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.925160764s
Feb 26 11:32:21.273: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.874186817s
Feb 26 11:32:22.294: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.853535713s
Feb 26 11:32:23.317: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.832835758s
Feb 26 11:32:24.327: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.809898025s
Feb 26 11:32:25.350: INFO: Verifying statefulset ss doesn't scale past 1 for another 799.516051ms
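The ten "doesn't scale past 1" lines are a roughly once-per-second check, held for about ten seconds, that the controller has not created ss-1 while ss-0 is unready. Exactly which property the framework inspects is not shown in the log; counting pods for the selector is one reasonable stand-in, and the sketch below assumes that:

```go
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// confirmNoScale checks, once per second for the given duration, that the
// number of pods matching the selector never exceeds maxReplicas.
// Assumption: pod count is the property being verified; the framework may
// instead look at the StatefulSet's status fields.
func confirmNoScale(kubeconfig, namespace, selector string, maxReplicas int, hold time.Duration) error {
	deadline := time.Now().Add(hold)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
			"get", "pods", "-l", selector, "--namespace="+namespace,
			"-o", "template", "--template={{range .items}}{{.metadata.name}} {{end}}").Output()
		if err != nil {
			return err
		}
		if n := len(strings.Fields(string(out))); n > maxReplicas {
			return fmt.Errorf("statefulset scaled past %d: found %d pods", maxReplicas, n)
		}
		time.Sleep(time.Second)
	}
	return nil
}
```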
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-b4lwp
Feb 26 11:32:26.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:32:27.832: INFO: stderr: "I0226 11:32:26.696460    1385 log.go:172] (0xc000702370) (0xc00072c640) Create stream\nI0226 11:32:26.697016    1385 log.go:172] (0xc000702370) (0xc00072c640) Stream added, broadcasting: 1\nI0226 11:32:26.705524    1385 log.go:172] (0xc000702370) Reply frame received for 1\nI0226 11:32:26.705612    1385 log.go:172] (0xc000702370) (0xc000652c80) Create stream\nI0226 11:32:26.705619    1385 log.go:172] (0xc000702370) (0xc000652c80) Stream added, broadcasting: 3\nI0226 11:32:26.707036    1385 log.go:172] (0xc000702370) Reply frame received for 3\nI0226 11:32:26.707106    1385 log.go:172] (0xc000702370) (0xc000718000) Create stream\nI0226 11:32:26.707137    1385 log.go:172] (0xc000702370) (0xc000718000) Stream added, broadcasting: 5\nI0226 11:32:26.707977    1385 log.go:172] (0xc000702370) Reply frame received for 5\nI0226 11:32:27.515264    1385 log.go:172] (0xc000702370) Data frame received for 3\nI0226 11:32:27.515334    1385 log.go:172] (0xc000652c80) (3) Data frame handling\nI0226 11:32:27.515354    1385 log.go:172] (0xc000652c80) (3) Data frame sent\nI0226 11:32:27.820571    1385 log.go:172] (0xc000702370) (0xc000652c80) Stream removed, broadcasting: 3\nI0226 11:32:27.820921    1385 log.go:172] (0xc000702370) Data frame received for 1\nI0226 11:32:27.820939    1385 log.go:172] (0xc00072c640) (1) Data frame handling\nI0226 11:32:27.820960    1385 log.go:172] (0xc00072c640) (1) Data frame sent\nI0226 11:32:27.820971    1385 log.go:172] (0xc000702370) (0xc00072c640) Stream removed, broadcasting: 1\nI0226 11:32:27.821277    1385 log.go:172] (0xc000702370) (0xc000718000) Stream removed, broadcasting: 5\nI0226 11:32:27.821308    1385 log.go:172] (0xc000702370) (0xc00072c640) Stream removed, broadcasting: 1\nI0226 11:32:27.821318    1385 log.go:172] (0xc000702370) (0xc000652c80) Stream removed, broadcasting: 3\nI0226 11:32:27.821328    1385 log.go:172] (0xc000702370) (0xc000718000) Stream removed, broadcasting: 5\n"
Feb 26 11:32:27.832: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:32:27.832: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:32:27.998: INFO: Found 1 stateful pods, waiting for 3
Feb 26 11:32:38.581: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:32:38.582: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:32:38.582: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 11:32:48.015: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:32:48.015: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:32:48.015: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
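"Scaled up in order" refers to the default OrderedReady pod management policy: ss-1 should not appear before ss-0 is Running and Ready, and ss-2 not before ss-1. The watcher initialized earlier for the selector baz=blah,foo=bar is presumably what supplies the observation order; the sketch below shows only the ordering check itself, applied to pod names recorded in the order they were first observed (event collection is elided):

```go
package sketch

import (
	"fmt"
	"strconv"
	"strings"
)

// ordinal extracts the numeric suffix of a stateful pod name such as "ss-2".
func ordinal(podName string) (int, error) {
	idx := strings.LastIndex(podName, "-")
	if idx < 0 {
		return 0, fmt.Errorf("unexpected pod name %q", podName)
	}
	return strconv.Atoi(podName[idx+1:])
}

// verifyScaledUpInOrder checks that pods were first observed in non-decreasing
// ordinal order, e.g. ss-0, ss-1, ss-2 for an OrderedReady scale-up.
func verifyScaledUpInOrder(observed []string) error {
	prev := -1
	for _, name := range observed {
		ord, err := ordinal(name)
		if err != nil {
			return err
		}
		if ord < prev {
			return fmt.Errorf("pod %s observed after ordinal %d; scale-up was not ordered", name, prev)
		}
		prev = ord
	}
	return nil
}
```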
STEP: Scale down will halt with unhealthy stateful pod
Feb 26 11:32:48.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:32:48.607: INFO: stderr: "I0226 11:32:48.234169    1407 log.go:172] (0xc00015e630) (0xc00052f7c0) Create stream\nI0226 11:32:48.234405    1407 log.go:172] (0xc00015e630) (0xc00052f7c0) Stream added, broadcasting: 1\nI0226 11:32:48.241472    1407 log.go:172] (0xc00015e630) Reply frame received for 1\nI0226 11:32:48.241616    1407 log.go:172] (0xc00015e630) (0xc000602000) Create stream\nI0226 11:32:48.241649    1407 log.go:172] (0xc00015e630) (0xc000602000) Stream added, broadcasting: 3\nI0226 11:32:48.243202    1407 log.go:172] (0xc00015e630) Reply frame received for 3\nI0226 11:32:48.243230    1407 log.go:172] (0xc00015e630) (0xc00051a500) Create stream\nI0226 11:32:48.243242    1407 log.go:172] (0xc00015e630) (0xc00051a500) Stream added, broadcasting: 5\nI0226 11:32:48.244180    1407 log.go:172] (0xc00015e630) Reply frame received for 5\nI0226 11:32:48.374631    1407 log.go:172] (0xc00015e630) Data frame received for 3\nI0226 11:32:48.374803    1407 log.go:172] (0xc000602000) (3) Data frame handling\nI0226 11:32:48.374850    1407 log.go:172] (0xc000602000) (3) Data frame sent\nI0226 11:32:48.590497    1407 log.go:172] (0xc00015e630) Data frame received for 1\nI0226 11:32:48.590780    1407 log.go:172] (0xc00052f7c0) (1) Data frame handling\nI0226 11:32:48.590831    1407 log.go:172] (0xc00052f7c0) (1) Data frame sent\nI0226 11:32:48.590863    1407 log.go:172] (0xc00015e630) (0xc00052f7c0) Stream removed, broadcasting: 1\nI0226 11:32:48.591159    1407 log.go:172] (0xc00015e630) (0xc000602000) Stream removed, broadcasting: 3\nI0226 11:32:48.591913    1407 log.go:172] (0xc00015e630) (0xc00051a500) Stream removed, broadcasting: 5\nI0226 11:32:48.592141    1407 log.go:172] (0xc00015e630) (0xc00052f7c0) Stream removed, broadcasting: 1\nI0226 11:32:48.592307    1407 log.go:172] (0xc00015e630) Go away received\nI0226 11:32:48.592637    1407 log.go:172] (0xc00015e630) (0xc000602000) Stream removed, broadcasting: 3\nI0226 11:32:48.592701    1407 log.go:172] (0xc00015e630) (0xc00051a500) Stream removed, broadcasting: 5\n"
Feb 26 11:32:48.608: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:32:48.608: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:32:48.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:32:49.272: INFO: stderr: "I0226 11:32:48.940291    1429 log.go:172] (0xc000726210) (0xc0003f2b40) Create stream\nI0226 11:32:48.940497    1429 log.go:172] (0xc000726210) (0xc0003f2b40) Stream added, broadcasting: 1\nI0226 11:32:48.956207    1429 log.go:172] (0xc000726210) Reply frame received for 1\nI0226 11:32:48.956437    1429 log.go:172] (0xc000726210) (0xc0008ba500) Create stream\nI0226 11:32:48.956491    1429 log.go:172] (0xc000726210) (0xc0008ba500) Stream added, broadcasting: 3\nI0226 11:32:48.959888    1429 log.go:172] (0xc000726210) Reply frame received for 3\nI0226 11:32:48.960096    1429 log.go:172] (0xc000726210) (0xc0002d2000) Create stream\nI0226 11:32:48.960123    1429 log.go:172] (0xc000726210) (0xc0002d2000) Stream added, broadcasting: 5\nI0226 11:32:48.962968    1429 log.go:172] (0xc000726210) Reply frame received for 5\nI0226 11:32:49.127860    1429 log.go:172] (0xc000726210) Data frame received for 3\nI0226 11:32:49.127986    1429 log.go:172] (0xc0008ba500) (3) Data frame handling\nI0226 11:32:49.128046    1429 log.go:172] (0xc0008ba500) (3) Data frame sent\nI0226 11:32:49.261008    1429 log.go:172] (0xc000726210) Data frame received for 1\nI0226 11:32:49.261182    1429 log.go:172] (0xc000726210) (0xc0008ba500) Stream removed, broadcasting: 3\nI0226 11:32:49.261301    1429 log.go:172] (0xc0003f2b40) (1) Data frame handling\nI0226 11:32:49.261351    1429 log.go:172] (0xc0003f2b40) (1) Data frame sent\nI0226 11:32:49.261440    1429 log.go:172] (0xc000726210) (0xc0002d2000) Stream removed, broadcasting: 5\nI0226 11:32:49.261515    1429 log.go:172] (0xc000726210) (0xc0003f2b40) Stream removed, broadcasting: 1\nI0226 11:32:49.261551    1429 log.go:172] (0xc000726210) Go away received\nI0226 11:32:49.261958    1429 log.go:172] (0xc000726210) (0xc0003f2b40) Stream removed, broadcasting: 1\nI0226 11:32:49.261992    1429 log.go:172] (0xc000726210) (0xc0008ba500) Stream removed, broadcasting: 3\nI0226 11:32:49.262011    1429 log.go:172] (0xc000726210) (0xc0002d2000) Stream removed, broadcasting: 5\n"
Feb 26 11:32:49.273: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:32:49.273: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:32:49.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:32:49.708: INFO: stderr: "I0226 11:32:49.485181    1450 log.go:172] (0xc000726370) (0xc000653540) Create stream\nI0226 11:32:49.485351    1450 log.go:172] (0xc000726370) (0xc000653540) Stream added, broadcasting: 1\nI0226 11:32:49.489315    1450 log.go:172] (0xc000726370) Reply frame received for 1\nI0226 11:32:49.489375    1450 log.go:172] (0xc000726370) (0xc000674000) Create stream\nI0226 11:32:49.489384    1450 log.go:172] (0xc000726370) (0xc000674000) Stream added, broadcasting: 3\nI0226 11:32:49.490446    1450 log.go:172] (0xc000726370) Reply frame received for 3\nI0226 11:32:49.490495    1450 log.go:172] (0xc000726370) (0xc0006740a0) Create stream\nI0226 11:32:49.490584    1450 log.go:172] (0xc000726370) (0xc0006740a0) Stream added, broadcasting: 5\nI0226 11:32:49.492315    1450 log.go:172] (0xc000726370) Reply frame received for 5\nI0226 11:32:49.611242    1450 log.go:172] (0xc000726370) Data frame received for 3\nI0226 11:32:49.611278    1450 log.go:172] (0xc000674000) (3) Data frame handling\nI0226 11:32:49.611288    1450 log.go:172] (0xc000674000) (3) Data frame sent\nI0226 11:32:49.700347    1450 log.go:172] (0xc000726370) (0xc000674000) Stream removed, broadcasting: 3\nI0226 11:32:49.700726    1450 log.go:172] (0xc000726370) Data frame received for 1\nI0226 11:32:49.700846    1450 log.go:172] (0xc000726370) (0xc0006740a0) Stream removed, broadcasting: 5\nI0226 11:32:49.700925    1450 log.go:172] (0xc000653540) (1) Data frame handling\nI0226 11:32:49.700958    1450 log.go:172] (0xc000653540) (1) Data frame sent\nI0226 11:32:49.700978    1450 log.go:172] (0xc000726370) (0xc000653540) Stream removed, broadcasting: 1\nI0226 11:32:49.701020    1450 log.go:172] (0xc000726370) Go away received\nI0226 11:32:49.701241    1450 log.go:172] (0xc000726370) (0xc000653540) Stream removed, broadcasting: 1\nI0226 11:32:49.701268    1450 log.go:172] (0xc000726370) (0xc000674000) Stream removed, broadcasting: 3\nI0226 11:32:49.701281    1450 log.go:172] (0xc000726370) (0xc0006740a0) Stream removed, broadcasting: 5\n"
Feb 26 11:32:49.709: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:32:49.709: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:32:49.709: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:32:49.721: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 26 11:32:59.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:32:59.757: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:32:59.757: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:32:59.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999543s
Feb 26 11:33:00.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989012092s
Feb 26 11:33:01.858: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.967308054s
Feb 26 11:33:02.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.920479359s
Feb 26 11:33:03.915: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.89356287s
Feb 26 11:33:04.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.86388997s
Feb 26 11:33:06.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.833414846s
Feb 26 11:33:07.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.777819774s
Feb 26 11:33:08.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.743369476s
Feb 26 11:33:09.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 727.05046ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-b4lwp
Feb 26 11:33:10.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:11.035: INFO: stderr: "I0226 11:33:10.353166    1472 log.go:172] (0xc00085c210) (0xc00089b9a0) Create stream\nI0226 11:33:10.353443    1472 log.go:172] (0xc00085c210) (0xc00089b9a0) Stream added, broadcasting: 1\nI0226 11:33:10.363838    1472 log.go:172] (0xc00085c210) Reply frame received for 1\nI0226 11:33:10.364063    1472 log.go:172] (0xc00085c210) (0xc0002a8000) Create stream\nI0226 11:33:10.364129    1472 log.go:172] (0xc00085c210) (0xc0002a8000) Stream added, broadcasting: 3\nI0226 11:33:10.367065    1472 log.go:172] (0xc00085c210) Reply frame received for 3\nI0226 11:33:10.367182    1472 log.go:172] (0xc00085c210) (0xc000886000) Create stream\nI0226 11:33:10.367222    1472 log.go:172] (0xc00085c210) (0xc000886000) Stream added, broadcasting: 5\nI0226 11:33:10.369542    1472 log.go:172] (0xc00085c210) Reply frame received for 5\nI0226 11:33:10.550724    1472 log.go:172] (0xc00085c210) Data frame received for 3\nI0226 11:33:10.551332    1472 log.go:172] (0xc0002a8000) (3) Data frame handling\nI0226 11:33:10.551450    1472 log.go:172] (0xc0002a8000) (3) Data frame sent\nI0226 11:33:11.020390    1472 log.go:172] (0xc00085c210) (0xc0002a8000) Stream removed, broadcasting: 3\nI0226 11:33:11.020684    1472 log.go:172] (0xc00085c210) Data frame received for 1\nI0226 11:33:11.020707    1472 log.go:172] (0xc00089b9a0) (1) Data frame handling\nI0226 11:33:11.020727    1472 log.go:172] (0xc00089b9a0) (1) Data frame sent\nI0226 11:33:11.020831    1472 log.go:172] (0xc00085c210) (0xc00089b9a0) Stream removed, broadcasting: 1\nI0226 11:33:11.020975    1472 log.go:172] (0xc00085c210) (0xc000886000) Stream removed, broadcasting: 5\nI0226 11:33:11.021098    1472 log.go:172] (0xc00085c210) Go away received\nI0226 11:33:11.021315    1472 log.go:172] (0xc00085c210) (0xc00089b9a0) Stream removed, broadcasting: 1\nI0226 11:33:11.021334    1472 log.go:172] (0xc00085c210) (0xc0002a8000) Stream removed, broadcasting: 3\nI0226 11:33:11.021361    1472 log.go:172] (0xc00085c210) (0xc000886000) Stream removed, broadcasting: 5\n"
Feb 26 11:33:11.036: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:33:11.036: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:33:11.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:11.736: INFO: stderr: "I0226 11:33:11.443640    1494 log.go:172] (0xc000738370) (0xc0007c4640) Create stream\nI0226 11:33:11.443964    1494 log.go:172] (0xc000738370) (0xc0007c4640) Stream added, broadcasting: 1\nI0226 11:33:11.475877    1494 log.go:172] (0xc000738370) Reply frame received for 1\nI0226 11:33:11.475914    1494 log.go:172] (0xc000738370) (0xc000698f00) Create stream\nI0226 11:33:11.475942    1494 log.go:172] (0xc000738370) (0xc000698f00) Stream added, broadcasting: 3\nI0226 11:33:11.477311    1494 log.go:172] (0xc000738370) Reply frame received for 3\nI0226 11:33:11.477361    1494 log.go:172] (0xc000738370) (0xc000546000) Create stream\nI0226 11:33:11.477391    1494 log.go:172] (0xc000738370) (0xc000546000) Stream added, broadcasting: 5\nI0226 11:33:11.478390    1494 log.go:172] (0xc000738370) Reply frame received for 5\nI0226 11:33:11.596344    1494 log.go:172] (0xc000738370) Data frame received for 3\nI0226 11:33:11.596378    1494 log.go:172] (0xc000698f00) (3) Data frame handling\nI0226 11:33:11.596408    1494 log.go:172] (0xc000698f00) (3) Data frame sent\nI0226 11:33:11.729968    1494 log.go:172] (0xc000738370) Data frame received for 1\nI0226 11:33:11.730113    1494 log.go:172] (0xc000738370) (0xc000546000) Stream removed, broadcasting: 5\nI0226 11:33:11.730171    1494 log.go:172] (0xc0007c4640) (1) Data frame handling\nI0226 11:33:11.730215    1494 log.go:172] (0xc0007c4640) (1) Data frame sent\nI0226 11:33:11.730323    1494 log.go:172] (0xc000738370) (0xc000698f00) Stream removed, broadcasting: 3\nI0226 11:33:11.730366    1494 log.go:172] (0xc000738370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0226 11:33:11.730393    1494 log.go:172] (0xc000738370) Go away received\nI0226 11:33:11.730749    1494 log.go:172] (0xc000738370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0226 11:33:11.730762    1494 log.go:172] (0xc000738370) (0xc000698f00) Stream removed, broadcasting: 3\nI0226 11:33:11.730776    1494 log.go:172] (0xc000738370) (0xc000546000) Stream removed, broadcasting: 5\n"
Feb 26 11:33:11.736: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:33:11.736: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:33:11.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:12.295: INFO: rc: 126
Feb 26 11:33:12.296: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 I0226 11:33:12.239937    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Create stream
I0226 11:33:12.240210    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Stream added, broadcasting: 1
I0226 11:33:12.253711    1517 log.go:172] (0xc0006c6370) Reply frame received for 1
I0226 11:33:12.253770    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Create stream
I0226 11:33:12.253780    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Stream added, broadcasting: 3
I0226 11:33:12.255923    1517 log.go:172] (0xc0006c6370) Reply frame received for 3
I0226 11:33:12.256017    1517 log.go:172] (0xc0006c6370) (0xc000570000) Create stream
I0226 11:33:12.256029    1517 log.go:172] (0xc0006c6370) (0xc000570000) Stream added, broadcasting: 5
I0226 11:33:12.258034    1517 log.go:172] (0xc0006c6370) Reply frame received for 5
I0226 11:33:12.284141    1517 log.go:172] (0xc0006c6370) Data frame received for 3
I0226 11:33:12.284207    1517 log.go:172] (0xc0005b8dc0) (3) Data frame handling
I0226 11:33:12.284236    1517 log.go:172] (0xc0005b8dc0) (3) Data frame sent
I0226 11:33:12.286914    1517 log.go:172] (0xc0006c6370) Data frame received for 1
I0226 11:33:12.286930    1517 log.go:172] (0xc0006ea640) (1) Data frame handling
I0226 11:33:12.286945    1517 log.go:172] (0xc0006ea640) (1) Data frame sent
I0226 11:33:12.287137    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Stream removed, broadcasting: 3
I0226 11:33:12.287189    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Stream removed, broadcasting: 1
I0226 11:33:12.287287    1517 log.go:172] (0xc0006c6370) (0xc000570000) Stream removed, broadcasting: 5
I0226 11:33:12.287413    1517 log.go:172] (0xc0006c6370) Go away received
I0226 11:33:12.287456    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Stream removed, broadcasting: 1
I0226 11:33:12.287466    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Stream removed, broadcasting: 3
I0226 11:33:12.287479    1517 log.go:172] (0xc0006c6370) (0xc000570000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc001a5f590 exit status 126   true [0xc001a5c150 0xc001a5c168 0xc001a5c180] [0xc001a5c150 0xc001a5c168 0xc001a5c180] [0xc001a5c160 0xc001a5c178] [0x935700 0x935700] 0xc001df48a0 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0226 11:33:12.239937    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Create stream
I0226 11:33:12.240210    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Stream added, broadcasting: 1
I0226 11:33:12.253711    1517 log.go:172] (0xc0006c6370) Reply frame received for 1
I0226 11:33:12.253770    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Create stream
I0226 11:33:12.253780    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Stream added, broadcasting: 3
I0226 11:33:12.255923    1517 log.go:172] (0xc0006c6370) Reply frame received for 3
I0226 11:33:12.256017    1517 log.go:172] (0xc0006c6370) (0xc000570000) Create stream
I0226 11:33:12.256029    1517 log.go:172] (0xc0006c6370) (0xc000570000) Stream added, broadcasting: 5
I0226 11:33:12.258034    1517 log.go:172] (0xc0006c6370) Reply frame received for 5
I0226 11:33:12.284141    1517 log.go:172] (0xc0006c6370) Data frame received for 3
I0226 11:33:12.284207    1517 log.go:172] (0xc0005b8dc0) (3) Data frame handling
I0226 11:33:12.284236    1517 log.go:172] (0xc0005b8dc0) (3) Data frame sent
I0226 11:33:12.286914    1517 log.go:172] (0xc0006c6370) Data frame received for 1
I0226 11:33:12.286930    1517 log.go:172] (0xc0006ea640) (1) Data frame handling
I0226 11:33:12.286945    1517 log.go:172] (0xc0006ea640) (1) Data frame sent
I0226 11:33:12.287137    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Stream removed, broadcasting: 3
I0226 11:33:12.287189    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Stream removed, broadcasting: 1
I0226 11:33:12.287287    1517 log.go:172] (0xc0006c6370) (0xc000570000) Stream removed, broadcasting: 5
I0226 11:33:12.287413    1517 log.go:172] (0xc0006c6370) Go away received
I0226 11:33:12.287456    1517 log.go:172] (0xc0006c6370) (0xc0006ea640) Stream removed, broadcasting: 1
I0226 11:33:12.287466    1517 log.go:172] (0xc0006c6370) (0xc0005b8dc0) Stream removed, broadcasting: 3
I0226 11:33:12.287479    1517 log.go:172] (0xc0006c6370) (0xc000570000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126

Feb 26 11:33:22.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:22.592: INFO: rc: 1
Feb 26 11:33:22.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000cfecc0 exit status 1   true [0xc00016f1d0 0xc00016f208 0xc00016f2a0] [0xc00016f1d0 0xc00016f208 0xc00016f2a0] [0xc00016f200 0xc00016f278] [0x935700 0x935700] 0xc002411860 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 26 11:33:32.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:32.787: INFO: rc: 1
Feb 26 11:33:32.787: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000cfee10 exit status 1   true [0xc00016f2d0 0xc00016f340 0xc00016f378] [0xc00016f2d0 0xc00016f340 0xc00016f378] [0xc00016f2f8 0xc00016f370] [0x935700 0x935700] 0xc002411b00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:33:42.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:42.915: INFO: rc: 1
Feb 26 11:33:42.916: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001172f60 exit status 1   true [0xc0018e81c0 0xc0018e81d8 0xc0018e81f0] [0xc0018e81c0 0xc0018e81d8 0xc0018e81f0] [0xc0018e81d0 0xc0018e81e8] [0x935700 0x935700] 0xc001db6b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:33:52.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:33:53.120: INFO: rc: 1
Feb 26 11:33:53.120: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000cfef90 exit status 1   true [0xc00016f388 0xc00016f3a8 0xc00016f3c0] [0xc00016f388 0xc00016f3a8 0xc00016f3c0] [0xc00016f3a0 0xc00016f3b8] [0x935700 0x935700] 0xc002411da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:34:03.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:34:03.300: INFO: rc: 1
Feb 26 11:34:03.301: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001173230 exit status 1   true [0xc0018e81f8 0xc0018e8210 0xc0018e8228] [0xc0018e81f8 0xc0018e8210 0xc0018e8228] [0xc0018e8208 0xc0018e8220] [0x935700 0x935700] 0xc001db6ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:34:13.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:34:13.458: INFO: rc: 1
Feb 26 11:34:13.459: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001173350 exit status 1   true [0xc0018e8230 0xc0018e8248 0xc0018e8260] [0xc0018e8230 0xc0018e8248 0xc0018e8260] [0xc0018e8240 0xc0018e8258] [0x935700 0x935700] 0xc001db7260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:34:23.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:34:23.657: INFO: rc: 1
Feb 26 11:34:23.658: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a62210 exit status 1   true [0xc000dc8108 0xc000dc8288 0xc000dc8388] [0xc000dc8108 0xc000dc8288 0xc000dc8388] [0xc000dc8230 0xc000dc8348] [0x935700 0x935700] 0xc001ccc2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:34:33.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:34:33.829: INFO: rc: 1
Feb 26 11:34:33.829: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a62360 exit status 1   true [0xc000dc8400 0xc000dc84b0 0xc000dc85a8] [0xc000dc8400 0xc000dc84b0 0xc000dc85a8] [0xc000dc8468 0xc000dc8548] [0x935700 0x935700] 0xc001ccc540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:34:43.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:34:43.961: INFO: rc: 1
Feb 26 11:34:43.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a624e0 exit status 1   true [0xc000dc85c8 0xc000dc8640 0xc000dc8710] [0xc000dc85c8 0xc000dc8640 0xc000dc8710] [0xc000dc8600 0xc000dc86c8] [0x935700 0x935700] 0xc001ccc960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:34:53.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:34:54.117: INFO: rc: 1
Feb 26 11:34:54.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a62660 exit status 1   true [0xc000dc8730 0xc000dc87e8 0xc000dc88c8] [0xc000dc8730 0xc000dc87e8 0xc000dc88c8] [0xc000dc8780 0xc000dc88b0] [0x935700 0x935700] 0xc001cccc00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:35:04.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:35:04.229: INFO: rc: 1
Feb 26 11:35:04.230: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00147e150 exit status 1   true [0xc001a5c000 0xc001a5c018 0xc001a5c030] [0xc001a5c000 0xc001a5c018 0xc001a5c030] [0xc001a5c010 0xc001a5c028] [0x935700 0x935700] 0xc0020e0c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:35:14.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:35:14.364: INFO: rc: 1
Feb 26 11:35:14.364: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00147e3f0 exit status 1   true [0xc001a5c038 0xc001a5c050 0xc001a5c068] [0xc001a5c038 0xc001a5c050 0xc001a5c068] [0xc001a5c048 0xc001a5c060] [0x935700 0x935700] 0xc0020e1560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:35:24.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:35:24.559: INFO: rc: 1
Feb 26 11:35:24.559: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a62780 exit status 1   true [0xc000dc8928 0xc000dc89f0 0xc000dc8b18] [0xc000dc8928 0xc000dc89f0 0xc000dc8b18] [0xc000dc89d0 0xc000dc8aa0] [0x935700 0x935700] 0xc001ffe360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:35:34.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:35:34.659: INFO: rc: 1
Feb 26 11:35:34.659: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a628a0 exit status 1   true [0xc000dc8b68 0xc000dc8ce8 0xc000dc8dd8] [0xc000dc8b68 0xc000dc8ce8 0xc000dc8dd8] [0xc000dc8c30 0xc000dc8db8] [0x935700 0x935700] 0xc001ffe9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:35:44.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:35:44.810: INFO: rc: 1
Feb 26 11:35:44.810: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00147e540 exit status 1   true [0xc001a5c070 0xc001a5c088 0xc001a5c0a0] [0xc001a5c070 0xc001a5c088 0xc001a5c0a0] [0xc001a5c080 0xc001a5c098] [0x935700 0x935700] 0xc0020e1800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:35:54.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:35:54.944: INFO: rc: 1
Feb 26 11:35:54.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00147e690 exit status 1   true [0xc001a5c0a8 0xc001a5c0c0 0xc001a5c0d8] [0xc001a5c0a8 0xc001a5c0c0 0xc001a5c0d8] [0xc001a5c0b8 0xc001a5c0d0] [0x935700 0x935700] 0xc0020e1aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:36:04.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:36:05.105: INFO: rc: 1
Feb 26 11:36:05.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a62a50 exit status 1   true [0xc000dc8e20 0xc000dc8f28 0xc000dc9000] [0xc000dc8e20 0xc000dc8f28 0xc000dc9000] [0xc000dc8ef0 0xc000dc8fe8] [0x935700 0x935700] 0xc001ffecc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:36:15.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:36:15.261: INFO: rc: 1
Feb 26 11:36:15.261: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018063c0 exit status 1   true [0xc00016e000 0xc00016ecb0 0xc00016ece8] [0xc00016e000 0xc00016ecb0 0xc00016ece8] [0xc00016ebe0 0xc00016ece0] [0x935700 0x935700] 0xc001df42a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:36:25.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:36:25.409: INFO: rc: 1
Feb 26 11:36:25.410: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000a62cc0 exit status 1   true [0xc000dc90f8 0xc000dc9190 0xc000dc9248] [0xc000dc90f8 0xc000dc9190 0xc000dc9248] [0xc000dc9158 0xc000dc9210] [0x935700 0x935700] 0xc001ffef60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 26 11:36:35 to 11:38:06: INFO: The same RunHostCmd retry loop continued every 10s (attempts at 11:36:35.410, 11:36:45.561, 11:36:55.713, 11:37:05.896, 11:37:16.100, 11:37:26.269, 11:37:36.399, 11:37:46.557, 11:37:56.706 and 11:38:06.797); each attempt ran the identical kubectl exec command shown above, returned rc: 1, produced no stdout, and failed with the same stderr:
Error from server (NotFound): pods "ss-2" not found
Feb 26 11:38:16.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-b4lwp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:38:17.085: INFO: rc: 1
Feb 26 11:38:17.086: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Feb 26 11:38:17.086: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 26 11:38:17.106: INFO: Deleting all statefulset in ns e2e-tests-statefulset-b4lwp
Feb 26 11:38:17.108: INFO: Scaling statefulset ss to 0
Feb 26 11:38:17.131: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:38:17.136: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:38:17.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-b4lwp" for this suite.
Feb 26 11:38:25.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:38:25.429: INFO: namespace: e2e-tests-statefulset-b4lwp, resource: bindings, ignored listing per whitelist
Feb 26 11:38:25.500: INFO: namespace e2e-tests-statefulset-b4lwp deletion completed in 8.252512849s

• [SLOW TEST:390.532 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
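
The mv commands retried above are how the suite flips pod readiness: the pods' readiness check fetches /index.html, so moving the page out of nginx's webroot marks a pod NotReady and moving it back restores it, and the trailing '|| true' keeps the exec's exit status at 0 once the file has already been moved. A kubectl-only sketch of the final scale-down check; the namespace and names come from this run, which has since been torn down, so this is illustrative rather than replayable:

NS=e2e-tests-statefulset-b4lwp
# knock one pod out of Ready by hiding the page its readiness check serves
kubectl -n "$NS" exec ss-2 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# scale to 0 and watch teardown: an ordered StatefulSet removes the highest ordinal first (ss-2, then ss-1, then ss-0)
kubectl -n "$NS" scale statefulset ss --replicas=0
kubectl -n "$NS" get pods -w
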
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:38:25.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 26 11:38:35.800: INFO: Pod pod-hostip-8029aeee-588c-11ea-8134-0242ac110008 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:38:35.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bbbzq" for this suite.
Feb 26 11:38:59.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:39:00.017: INFO: namespace: e2e-tests-pods-bbbzq, resource: bindings, ignored listing per whitelist
Feb 26 11:39:00.027: INFO: namespace e2e-tests-pods-bbbzq deletion completed in 24.216927415s

• [SLOW TEST:34.527 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
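
The assertion above reduces to reading status.hostIP once the pod has been scheduled, i.e. the address of the node the pod landed on. An equivalent one-liner, using the pod and namespace names from this run (both deleted by now):

kubectl -n e2e-tests-pods-bbbzq get pod pod-hostip-8029aeee-588c-11ea-8134-0242ac110008 \
    -o jsonpath='{.status.hostIP}'
# printed 10.96.1.240 in this run
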
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:39:00.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 26 11:39:00.285: INFO: Waiting up to 5m0s for pod "downward-api-94c03140-588c-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-cttcp" to be "success or failure"
Feb 26 11:39:00.292: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.080178ms
Feb 26 11:39:03.010: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.724750742s
Feb 26 11:39:05.019: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.734244824s
Feb 26 11:39:07.794: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.508917389s
Feb 26 11:39:09.807: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.521788772s
Feb 26 11:39:11.823: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.538133577s
STEP: Saw pod success
Feb 26 11:39:11.823: INFO: Pod "downward-api-94c03140-588c-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:39:11.828: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-94c03140-588c-11ea-8134-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 26 11:39:12.559: INFO: Waiting for pod downward-api-94c03140-588c-11ea-8134-0242ac110008 to disappear
Feb 26 11:39:12.579: INFO: Pod downward-api-94c03140-588c-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:39:12.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cttcp" for this suite.
Feb 26 11:39:18.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:39:18.769: INFO: namespace: e2e-tests-downward-api-cttcp, resource: bindings, ignored listing per whitelist
Feb 26 11:39:18.916: INFO: namespace e2e-tests-downward-api-cttcp deletion completed in 6.317524157s

• [SLOW TEST:18.889 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
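
What the test above exercises: a container that sets no resource limits but still requests limits.cpu and limits.memory through downward API env vars; with no limits on the container, those values fall back to the node's allocatable resources. A minimal stand-in manifest (names, image and command are illustrative, not the suite's own pod spec):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
# once the pod completes, the printed values are derived from node allocatable,
# because the container declares no limits of its own
kubectl logs downward-defaults-demo
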
SSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:39:18.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-a01cf6d5-588c-11ea-8134-0242ac110008
STEP: Creating secret with name s-test-opt-upd-a01cf7df-588c-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a01cf6d5-588c-11ea-8134-0242ac110008
STEP: Updating secret s-test-opt-upd-a01cf7df-588c-11ea-8134-0242ac110008
STEP: Creating secret with name s-test-opt-create-a01cf82c-588c-11ea-8134-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:40:40.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jnnb8" for this suite.
Feb 26 11:41:04.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:41:04.389: INFO: namespace: e2e-tests-projected-jnnb8, resource: bindings, ignored listing per whitelist
Feb 26 11:41:04.436: INFO: namespace e2e-tests-projected-jnnb8 deletion completed in 24.303507496s

• [SLOW TEST:105.519 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
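
The updates above propagate because the pod mounts its secrets through a single projected volume whose sources are marked optional: true, so the pod keeps running while one source is deleted, another is updated and a third is created, and the kubelet eventually syncs each change into the mounted files. A minimal projected-volume sketch (all names and the image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-upd
          optional: true
      - secret:
          name: s-test-opt-create
          optional: true
EOF
# the pod starts even though s-test-opt-create does not exist yet;
# creating it later makes its keys show up under /etc/creds
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
kubectl exec projected-secret-demo -- ls /etc/creds
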
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:41:04.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4pwkk
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 26 11:41:04.810: INFO: Found 0 stateful pods, waiting for 3
Feb 26 11:41:14.818: INFO: Found 1 stateful pods, waiting for 3
Feb 26 11:41:24.833: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:41:24.833: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:41:24.833: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 11:41:34.826: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:41:34.826: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:41:34.826: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:41:34.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4pwkk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:41:35.388: INFO: stderr: "I0226 11:41:35.031493    2137 log.go:172] (0xc000776160) (0xc0006105a0) Create stream\nI0226 11:41:35.031660    2137 log.go:172] (0xc000776160) (0xc0006105a0) Stream added, broadcasting: 1\nI0226 11:41:35.038346    2137 log.go:172] (0xc000776160) Reply frame received for 1\nI0226 11:41:35.038372    2137 log.go:172] (0xc000776160) (0xc00087e0a0) Create stream\nI0226 11:41:35.038379    2137 log.go:172] (0xc000776160) (0xc00087e0a0) Stream added, broadcasting: 3\nI0226 11:41:35.039498    2137 log.go:172] (0xc000776160) Reply frame received for 3\nI0226 11:41:35.039528    2137 log.go:172] (0xc000776160) (0xc00086a0a0) Create stream\nI0226 11:41:35.039543    2137 log.go:172] (0xc000776160) (0xc00086a0a0) Stream added, broadcasting: 5\nI0226 11:41:35.040485    2137 log.go:172] (0xc000776160) Reply frame received for 5\nI0226 11:41:35.210379    2137 log.go:172] (0xc000776160) Data frame received for 3\nI0226 11:41:35.210461    2137 log.go:172] (0xc00087e0a0) (3) Data frame handling\nI0226 11:41:35.210479    2137 log.go:172] (0xc00087e0a0) (3) Data frame sent\nI0226 11:41:35.378258    2137 log.go:172] (0xc000776160) (0xc00086a0a0) Stream removed, broadcasting: 5\nI0226 11:41:35.378871    2137 log.go:172] (0xc000776160) Data frame received for 1\nI0226 11:41:35.378912    2137 log.go:172] (0xc000776160) (0xc00087e0a0) Stream removed, broadcasting: 3\nI0226 11:41:35.378957    2137 log.go:172] (0xc0006105a0) (1) Data frame handling\nI0226 11:41:35.378983    2137 log.go:172] (0xc0006105a0) (1) Data frame sent\nI0226 11:41:35.378994    2137 log.go:172] (0xc000776160) (0xc0006105a0) Stream removed, broadcasting: 1\nI0226 11:41:35.379018    2137 log.go:172] (0xc000776160) Go away received\nI0226 11:41:35.379415    2137 log.go:172] (0xc000776160) (0xc0006105a0) Stream removed, broadcasting: 1\nI0226 11:41:35.379433    2137 log.go:172] (0xc000776160) (0xc00087e0a0) Stream removed, broadcasting: 3\nI0226 11:41:35.379441    2137 log.go:172] (0xc000776160) (0xc00086a0a0) Stream removed, broadcasting: 5\n"
Feb 26 11:41:35.388: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:41:35.389: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 26 11:41:35.431: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 26 11:41:45.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4pwkk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:41:46.040: INFO: stderr: "I0226 11:41:45.755209    2158 log.go:172] (0xc00073e370) (0xc0006794a0) Create stream\nI0226 11:41:45.755617    2158 log.go:172] (0xc00073e370) (0xc0006794a0) Stream added, broadcasting: 1\nI0226 11:41:45.761775    2158 log.go:172] (0xc00073e370) Reply frame received for 1\nI0226 11:41:45.761817    2158 log.go:172] (0xc00073e370) (0xc000522460) Create stream\nI0226 11:41:45.761831    2158 log.go:172] (0xc00073e370) (0xc000522460) Stream added, broadcasting: 3\nI0226 11:41:45.762834    2158 log.go:172] (0xc00073e370) Reply frame received for 3\nI0226 11:41:45.762927    2158 log.go:172] (0xc00073e370) (0xc000306000) Create stream\nI0226 11:41:45.762943    2158 log.go:172] (0xc00073e370) (0xc000306000) Stream added, broadcasting: 5\nI0226 11:41:45.764153    2158 log.go:172] (0xc00073e370) Reply frame received for 5\nI0226 11:41:45.892183    2158 log.go:172] (0xc00073e370) Data frame received for 3\nI0226 11:41:45.892264    2158 log.go:172] (0xc000522460) (3) Data frame handling\nI0226 11:41:45.892297    2158 log.go:172] (0xc000522460) (3) Data frame sent\nI0226 11:41:46.025176    2158 log.go:172] (0xc00073e370) Data frame received for 1\nI0226 11:41:46.025285    2158 log.go:172] (0xc00073e370) (0xc000522460) Stream removed, broadcasting: 3\nI0226 11:41:46.025474    2158 log.go:172] (0xc0006794a0) (1) Data frame handling\nI0226 11:41:46.025527    2158 log.go:172] (0xc0006794a0) (1) Data frame sent\nI0226 11:41:46.025564    2158 log.go:172] (0xc00073e370) (0xc000306000) Stream removed, broadcasting: 5\nI0226 11:41:46.025630    2158 log.go:172] (0xc00073e370) (0xc0006794a0) Stream removed, broadcasting: 1\nI0226 11:41:46.025675    2158 log.go:172] (0xc00073e370) Go away received\nI0226 11:41:46.026048    2158 log.go:172] (0xc00073e370) (0xc0006794a0) Stream removed, broadcasting: 1\nI0226 11:41:46.026078    2158 log.go:172] (0xc00073e370) (0xc000522460) Stream removed, broadcasting: 3\nI0226 11:41:46.026104    2158 log.go:172] (0xc00073e370) (0xc000306000) Stream removed, broadcasting: 5\n"
Feb 26 11:41:46.041: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:41:46.041: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:41:56.116: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:41:56.116: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:41:56.116: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:41:56.116: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:42:06.161: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:42:06.161: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:42:06.161: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:42:16.158: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:42:16.158: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:42:16.158: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:42:26.187: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:42:26.187: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 11:42:36.602: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 26 11:42:46.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4pwkk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:42:46.882: INFO: stderr: "I0226 11:42:46.394886    2180 log.go:172] (0xc0006fe0b0) (0xc0007265a0) Create stream\nI0226 11:42:46.395114    2180 log.go:172] (0xc0006fe0b0) (0xc0007265a0) Stream added, broadcasting: 1\nI0226 11:42:46.401484    2180 log.go:172] (0xc0006fe0b0) Reply frame received for 1\nI0226 11:42:46.401526    2180 log.go:172] (0xc0006fe0b0) (0xc0005dcc80) Create stream\nI0226 11:42:46.401537    2180 log.go:172] (0xc0006fe0b0) (0xc0005dcc80) Stream added, broadcasting: 3\nI0226 11:42:46.402721    2180 log.go:172] (0xc0006fe0b0) Reply frame received for 3\nI0226 11:42:46.402749    2180 log.go:172] (0xc0006fe0b0) (0xc000726640) Create stream\nI0226 11:42:46.402755    2180 log.go:172] (0xc0006fe0b0) (0xc000726640) Stream added, broadcasting: 5\nI0226 11:42:46.403920    2180 log.go:172] (0xc0006fe0b0) Reply frame received for 5\nI0226 11:42:46.713879    2180 log.go:172] (0xc0006fe0b0) Data frame received for 3\nI0226 11:42:46.713936    2180 log.go:172] (0xc0005dcc80) (3) Data frame handling\nI0226 11:42:46.713970    2180 log.go:172] (0xc0005dcc80) (3) Data frame sent\nI0226 11:42:46.868027    2180 log.go:172] (0xc0006fe0b0) Data frame received for 1\nI0226 11:42:46.868131    2180 log.go:172] (0xc0006fe0b0) (0xc0005dcc80) Stream removed, broadcasting: 3\nI0226 11:42:46.868172    2180 log.go:172] (0xc0007265a0) (1) Data frame handling\nI0226 11:42:46.868213    2180 log.go:172] (0xc0007265a0) (1) Data frame sent\nI0226 11:42:46.868340    2180 log.go:172] (0xc0006fe0b0) (0xc000726640) Stream removed, broadcasting: 5\nI0226 11:42:46.868417    2180 log.go:172] (0xc0006fe0b0) (0xc0007265a0) Stream removed, broadcasting: 1\nI0226 11:42:46.868452    2180 log.go:172] (0xc0006fe0b0) Go away received\nI0226 11:42:46.869161    2180 log.go:172] (0xc0006fe0b0) (0xc0007265a0) Stream removed, broadcasting: 1\nI0226 11:42:46.869224    2180 log.go:172] (0xc0006fe0b0) (0xc0005dcc80) Stream removed, broadcasting: 3\nI0226 11:42:46.869242    2180 log.go:172] (0xc0006fe0b0) (0xc000726640) Stream removed, broadcasting: 5\n"
Feb 26 11:42:46.882: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:42:46.882: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:42:56.973: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 26 11:43:07.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4pwkk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:43:08.341: INFO: stderr: "I0226 11:43:07.418827    2202 log.go:172] (0xc00073c370) (0xc000760640) Create stream\nI0226 11:43:07.419106    2202 log.go:172] (0xc00073c370) (0xc000760640) Stream added, broadcasting: 1\nI0226 11:43:07.426654    2202 log.go:172] (0xc00073c370) Reply frame received for 1\nI0226 11:43:07.426712    2202 log.go:172] (0xc00073c370) (0xc000428b40) Create stream\nI0226 11:43:07.426726    2202 log.go:172] (0xc00073c370) (0xc000428b40) Stream added, broadcasting: 3\nI0226 11:43:07.427924    2202 log.go:172] (0xc00073c370) Reply frame received for 3\nI0226 11:43:07.427948    2202 log.go:172] (0xc00073c370) (0xc0007606e0) Create stream\nI0226 11:43:07.427957    2202 log.go:172] (0xc00073c370) (0xc0007606e0) Stream added, broadcasting: 5\nI0226 11:43:07.429529    2202 log.go:172] (0xc00073c370) Reply frame received for 5\nI0226 11:43:07.745442    2202 log.go:172] (0xc00073c370) Data frame received for 3\nI0226 11:43:07.745548    2202 log.go:172] (0xc000428b40) (3) Data frame handling\nI0226 11:43:07.745573    2202 log.go:172] (0xc000428b40) (3) Data frame sent\nI0226 11:43:08.328971    2202 log.go:172] (0xc00073c370) (0xc000428b40) Stream removed, broadcasting: 3\nI0226 11:43:08.329731    2202 log.go:172] (0xc00073c370) Data frame received for 1\nI0226 11:43:08.329853    2202 log.go:172] (0xc000760640) (1) Data frame handling\nI0226 11:43:08.330177    2202 log.go:172] (0xc000760640) (1) Data frame sent\nI0226 11:43:08.330283    2202 log.go:172] (0xc00073c370) (0xc0007606e0) Stream removed, broadcasting: 5\nI0226 11:43:08.330425    2202 log.go:172] (0xc00073c370) (0xc000760640) Stream removed, broadcasting: 1\nI0226 11:43:08.330520    2202 log.go:172] (0xc00073c370) Go away received\nI0226 11:43:08.331337    2202 log.go:172] (0xc00073c370) (0xc000760640) Stream removed, broadcasting: 1\nI0226 11:43:08.331373    2202 log.go:172] (0xc00073c370) (0xc000428b40) Stream removed, broadcasting: 3\nI0226 11:43:08.331396    2202 log.go:172] (0xc00073c370) (0xc0007606e0) Stream removed, broadcasting: 5\n"
Feb 26 11:43:08.341: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:43:08.341: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:43:18.436: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:43:18.436: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:18.436: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:18.436: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:28.505: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:43:28.505: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:28.505: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:38.534: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:43:38.534: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:48.486: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
Feb 26 11:43:48.493: INFO: Waiting for Pod e2e-tests-statefulset-4pwkk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 26 11:43:58.564: INFO: Waiting for StatefulSet e2e-tests-statefulset-4pwkk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 26 11:44:08.478: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4pwkk
Feb 26 11:44:08.495: INFO: Scaling statefulset ss2 to 0
Feb 26 11:44:48.625: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:44:48.636: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:44:48.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4pwkk" for this suite.
Feb 26 11:44:56.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:44:57.072: INFO: namespace: e2e-tests-statefulset-4pwkk, resource: bindings, ignored listing per whitelist
Feb 26 11:44:57.074: INFO: namespace e2e-tests-statefulset-4pwkk deletion completed in 8.397545887s

• [SLOW TEST:232.638 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
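
The roll-forward and rollback above are ordinary RollingUpdate changes to the pod template, applied from the highest ordinal down; the two revision names in the log (ss2-6c5cd755cd and ss2-7c9b54fd4c) are the old and new ControllerRevisions. Roughly the same thing by hand; the container name nginx is an assumption about the spec, and the namespace from this run is gone:

NS=e2e-tests-statefulset-4pwkk
# roll forward: update the template image; pods are replaced in reverse ordinal order
kubectl -n "$NS" set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n "$NS" rollout status statefulset/ss2
# every template change is recorded as a ControllerRevision
kubectl -n "$NS" get controllerrevisions
# roll back by reverting the template image, which is roughly what the "Rolling back" step amounts to
kubectl -n "$NS" set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine
kubectl -n "$NS" rollout status statefulset/ss2
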
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:44:57.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-t4j4r
I0226 11:44:57.354159       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-t4j4r, replica count: 1
I0226 11:44:58.405869       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:44:59.406638       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:00.407591       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:01.408681       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:02.409903       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:03.410728       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:04.411785       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:05.412697       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:06.413413       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0226 11:45:07.413974       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
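
Each Created/Got endpoints pair below roughly measures the time from creating a Service that selects the svc-latency-rc pod until that Service's Endpoints object reports an address. One sample can be approximated by hand like this (the service name and port are made up, and the namespace from this run is gone):

NS=e2e-tests-svc-latency-t4j4r
time (
  kubectl -n "$NS" expose rc svc-latency-rc --name=latency-svc-manual --port=80
  # poll until the endpoints controller has filled in an address for the new service
  until kubectl -n "$NS" get endpoints latency-svc-manual \
      -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q .; do
    sleep 0.1
  done
)
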
Feb 26 11:45:07.575: INFO: Created: latency-svc-ppmxf
Feb 26 11:45:07.628: INFO: Got endpoints: latency-svc-ppmxf [113.783441ms]
Feb 26 11:45:07.751: INFO: Created: latency-svc-mrxbv
Feb 26 11:45:07.796: INFO: Got endpoints: latency-svc-mrxbv [166.765025ms]
Feb 26 11:45:07.886: INFO: Created: latency-svc-frc5f
Feb 26 11:45:07.921: INFO: Created: latency-svc-4m868
Feb 26 11:45:07.922: INFO: Got endpoints: latency-svc-frc5f [293.201696ms]
Feb 26 11:45:08.054: INFO: Got endpoints: latency-svc-4m868 [425.363552ms]
Feb 26 11:45:08.076: INFO: Created: latency-svc-m7gq4
Feb 26 11:45:08.091: INFO: Got endpoints: latency-svc-m7gq4 [461.343257ms]
Feb 26 11:45:08.297: INFO: Created: latency-svc-dv4qc
Feb 26 11:45:08.354: INFO: Got endpoints: latency-svc-dv4qc [725.332507ms]
Feb 26 11:45:08.392: INFO: Created: latency-svc-2s7f5
Feb 26 11:45:08.500: INFO: Got endpoints: latency-svc-2s7f5 [870.864675ms]
Feb 26 11:45:08.572: INFO: Created: latency-svc-6r6h7
Feb 26 11:45:08.682: INFO: Got endpoints: latency-svc-6r6h7 [1.052893654s]
Feb 26 11:45:08.694: INFO: Created: latency-svc-sct8r
Feb 26 11:45:08.712: INFO: Got endpoints: latency-svc-sct8r [1.082389879s]
Feb 26 11:45:08.788: INFO: Created: latency-svc-zsd6p
Feb 26 11:45:08.921: INFO: Got endpoints: latency-svc-zsd6p [1.29156926s]
Feb 26 11:45:08.948: INFO: Created: latency-svc-ksgn2
Feb 26 11:45:08.959: INFO: Got endpoints: latency-svc-ksgn2 [1.330027154s]
Feb 26 11:45:09.135: INFO: Created: latency-svc-zxx2f
Feb 26 11:45:09.161: INFO: Got endpoints: latency-svc-zxx2f [1.53170038s]
Feb 26 11:45:09.219: INFO: Created: latency-svc-22q46
Feb 26 11:45:09.221: INFO: Got endpoints: latency-svc-22q46 [1.590927154s]
Feb 26 11:45:09.424: INFO: Created: latency-svc-9pv5p
Feb 26 11:45:09.453: INFO: Got endpoints: latency-svc-9pv5p [1.823631203s]
Feb 26 11:45:09.569: INFO: Created: latency-svc-xkj67
Feb 26 11:45:09.604: INFO: Got endpoints: latency-svc-xkj67 [1.974570063s]
Feb 26 11:45:09.826: INFO: Created: latency-svc-srcf8
Feb 26 11:45:09.839: INFO: Got endpoints: latency-svc-srcf8 [2.210736519s]
Feb 26 11:45:09.898: INFO: Created: latency-svc-zv4w9
Feb 26 11:45:10.028: INFO: Got endpoints: latency-svc-zv4w9 [2.231172265s]
Feb 26 11:45:10.133: INFO: Created: latency-svc-f8qmj
Feb 26 11:45:10.287: INFO: Created: latency-svc-w7c66
Feb 26 11:45:10.295: INFO: Got endpoints: latency-svc-f8qmj [2.373267013s]
Feb 26 11:45:10.345: INFO: Got endpoints: latency-svc-w7c66 [2.290335622s]
Feb 26 11:45:10.348: INFO: Created: latency-svc-mhgtl
Feb 26 11:45:10.473: INFO: Got endpoints: latency-svc-mhgtl [2.382150284s]
Feb 26 11:45:10.543: INFO: Created: latency-svc-cs6m7
Feb 26 11:45:10.688: INFO: Got endpoints: latency-svc-cs6m7 [2.333974098s]
Feb 26 11:45:10.709: INFO: Created: latency-svc-9chpc
Feb 26 11:45:10.720: INFO: Got endpoints: latency-svc-9chpc [2.219091857s]
Feb 26 11:45:10.783: INFO: Created: latency-svc-xhsh5
Feb 26 11:45:10.880: INFO: Got endpoints: latency-svc-xhsh5 [2.197575847s]
Feb 26 11:45:10.895: INFO: Created: latency-svc-ncgts
Feb 26 11:45:10.901: INFO: Got endpoints: latency-svc-ncgts [2.188817877s]
Feb 26 11:45:11.172: INFO: Created: latency-svc-vzct2
Feb 26 11:45:11.187: INFO: Got endpoints: latency-svc-vzct2 [2.265589277s]
Feb 26 11:45:11.253: INFO: Created: latency-svc-t4d5h
Feb 26 11:45:11.328: INFO: Got endpoints: latency-svc-t4d5h [2.368481489s]
Feb 26 11:45:11.396: INFO: Created: latency-svc-6s7mr
Feb 26 11:45:11.437: INFO: Got endpoints: latency-svc-6s7mr [2.275869301s]
Feb 26 11:45:11.604: INFO: Created: latency-svc-b47cs
Feb 26 11:45:11.620: INFO: Got endpoints: latency-svc-b47cs [2.399433097s]
Feb 26 11:45:11.840: INFO: Created: latency-svc-p6dxx
Feb 26 11:45:11.910: INFO: Created: latency-svc-779xz
Feb 26 11:45:11.910: INFO: Got endpoints: latency-svc-p6dxx [2.456630082s]
Feb 26 11:45:12.077: INFO: Got endpoints: latency-svc-779xz [2.472211018s]
Feb 26 11:45:12.101: INFO: Created: latency-svc-nfh5t
Feb 26 11:45:12.127: INFO: Got endpoints: latency-svc-nfh5t [2.287654355s]
Feb 26 11:45:12.361: INFO: Created: latency-svc-v4qn5
Feb 26 11:45:12.381: INFO: Got endpoints: latency-svc-v4qn5 [2.352453846s]
Feb 26 11:45:12.457: INFO: Created: latency-svc-zspn2
Feb 26 11:45:12.624: INFO: Got endpoints: latency-svc-zspn2 [2.328636364s]
Feb 26 11:45:12.665: INFO: Created: latency-svc-26pww
Feb 26 11:45:12.708: INFO: Got endpoints: latency-svc-26pww [2.362406475s]
Feb 26 11:45:12.848: INFO: Created: latency-svc-vd7mj
Feb 26 11:45:12.888: INFO: Got endpoints: latency-svc-vd7mj [2.414387626s]
Feb 26 11:45:12.945: INFO: Created: latency-svc-ccl57
Feb 26 11:45:13.104: INFO: Got endpoints: latency-svc-ccl57 [2.415331159s]
Feb 26 11:45:13.140: INFO: Created: latency-svc-8dnzc
Feb 26 11:45:13.179: INFO: Got endpoints: latency-svc-8dnzc [2.459119612s]
Feb 26 11:45:13.354: INFO: Created: latency-svc-cmmtv
Feb 26 11:45:13.373: INFO: Got endpoints: latency-svc-cmmtv [2.491657963s]
Feb 26 11:45:13.418: INFO: Created: latency-svc-p96ff
Feb 26 11:45:13.552: INFO: Got endpoints: latency-svc-p96ff [2.650605937s]
Feb 26 11:45:13.572: INFO: Created: latency-svc-5d5tv
Feb 26 11:45:13.630: INFO: Got endpoints: latency-svc-5d5tv [2.442496587s]
Feb 26 11:45:13.740: INFO: Created: latency-svc-tnslj
Feb 26 11:45:13.758: INFO: Got endpoints: latency-svc-tnslj [2.428979563s]
Feb 26 11:45:13.926: INFO: Created: latency-svc-924l2
Feb 26 11:45:13.955: INFO: Got endpoints: latency-svc-924l2 [2.517231412s]
Feb 26 11:45:14.127: INFO: Created: latency-svc-hfbfn
Feb 26 11:45:14.150: INFO: Got endpoints: latency-svc-hfbfn [2.529556695s]
Feb 26 11:45:14.230: INFO: Created: latency-svc-gbfwd
Feb 26 11:45:14.335: INFO: Got endpoints: latency-svc-gbfwd [2.424274477s]
Feb 26 11:45:14.352: INFO: Created: latency-svc-vxsh2
Feb 26 11:45:14.391: INFO: Got endpoints: latency-svc-vxsh2 [2.313172136s]
Feb 26 11:45:14.625: INFO: Created: latency-svc-m8n96
Feb 26 11:45:14.634: INFO: Got endpoints: latency-svc-m8n96 [2.507150556s]
Feb 26 11:45:14.700: INFO: Created: latency-svc-w25kh
Feb 26 11:45:14.802: INFO: Got endpoints: latency-svc-w25kh [2.421536656s]
Feb 26 11:45:14.827: INFO: Created: latency-svc-h8284
Feb 26 11:45:14.861: INFO: Got endpoints: latency-svc-h8284 [2.236279553s]
Feb 26 11:45:14.885: INFO: Created: latency-svc-rq6ss
Feb 26 11:45:14.902: INFO: Got endpoints: latency-svc-rq6ss [2.19339072s]
Feb 26 11:45:15.063: INFO: Created: latency-svc-dcb9n
Feb 26 11:45:15.097: INFO: Got endpoints: latency-svc-dcb9n [2.208360887s]
Feb 26 11:45:15.141: INFO: Created: latency-svc-7rpbx
Feb 26 11:45:15.264: INFO: Got endpoints: latency-svc-7rpbx [2.159100623s]
Feb 26 11:45:15.291: INFO: Created: latency-svc-qs8s9
Feb 26 11:45:15.303: INFO: Got endpoints: latency-svc-qs8s9 [2.123748936s]
Feb 26 11:45:15.364: INFO: Created: latency-svc-zt88v
Feb 26 11:45:15.449: INFO: Got endpoints: latency-svc-zt88v [2.076488088s]
Feb 26 11:45:15.469: INFO: Created: latency-svc-n4jzk
Feb 26 11:45:15.483: INFO: Got endpoints: latency-svc-n4jzk [1.930872229s]
Feb 26 11:45:15.526: INFO: Created: latency-svc-5zt5g
Feb 26 11:45:15.541: INFO: Got endpoints: latency-svc-5zt5g [1.911120972s]
Feb 26 11:45:15.658: INFO: Created: latency-svc-tk7nq
Feb 26 11:45:15.686: INFO: Got endpoints: latency-svc-tk7nq [1.927904863s]
Feb 26 11:45:15.845: INFO: Created: latency-svc-48n8v
Feb 26 11:45:16.383: INFO: Got endpoints: latency-svc-48n8v [2.427981215s]
Feb 26 11:45:16.435: INFO: Created: latency-svc-jddhb
Feb 26 11:45:16.479: INFO: Got endpoints: latency-svc-jddhb [2.328635127s]
Feb 26 11:45:16.667: INFO: Created: latency-svc-9dtkx
Feb 26 11:45:16.691: INFO: Got endpoints: latency-svc-9dtkx [2.356027082s]
Feb 26 11:45:16.901: INFO: Created: latency-svc-85j7g
Feb 26 11:45:16.922: INFO: Got endpoints: latency-svc-85j7g [2.531136232s]
Feb 26 11:45:17.043: INFO: Created: latency-svc-72ln9
Feb 26 11:45:17.070: INFO: Got endpoints: latency-svc-72ln9 [2.435047735s]
Feb 26 11:45:17.119: INFO: Created: latency-svc-rvvr6
Feb 26 11:45:17.243: INFO: Got endpoints: latency-svc-rvvr6 [2.440373608s]
Feb 26 11:45:17.287: INFO: Created: latency-svc-5pf88
Feb 26 11:45:17.321: INFO: Got endpoints: latency-svc-5pf88 [2.460025956s]
Feb 26 11:45:17.443: INFO: Created: latency-svc-f77cm
Feb 26 11:45:17.471: INFO: Got endpoints: latency-svc-f77cm [2.568291438s]
Feb 26 11:45:17.658: INFO: Created: latency-svc-d86xq
Feb 26 11:45:17.837: INFO: Created: latency-svc-q7g88
Feb 26 11:45:17.850: INFO: Got endpoints: latency-svc-d86xq [2.752191703s]
Feb 26 11:45:17.880: INFO: Got endpoints: latency-svc-q7g88 [2.615692803s]
Feb 26 11:45:18.054: INFO: Created: latency-svc-8srmt
Feb 26 11:45:18.075: INFO: Got endpoints: latency-svc-8srmt [2.77069491s]
Feb 26 11:45:18.109: INFO: Created: latency-svc-n8pml
Feb 26 11:45:18.126: INFO: Got endpoints: latency-svc-n8pml [2.676688495s]
Feb 26 11:45:18.228: INFO: Created: latency-svc-jcn6c
Feb 26 11:45:18.251: INFO: Got endpoints: latency-svc-jcn6c [2.768270973s]
Feb 26 11:45:18.425: INFO: Created: latency-svc-qgd4q
Feb 26 11:45:18.447: INFO: Got endpoints: latency-svc-qgd4q [2.905120598s]
Feb 26 11:45:18.602: INFO: Created: latency-svc-md4bs
Feb 26 11:45:18.623: INFO: Got endpoints: latency-svc-md4bs [2.936786487s]
Feb 26 11:45:18.794: INFO: Created: latency-svc-lb8cr
Feb 26 11:45:18.874: INFO: Got endpoints: latency-svc-lb8cr [2.490170718s]
Feb 26 11:45:19.049: INFO: Created: latency-svc-85fzw
Feb 26 11:45:19.077: INFO: Got endpoints: latency-svc-85fzw [2.596717395s]
Feb 26 11:45:19.113: INFO: Created: latency-svc-d8p6g
Feb 26 11:45:19.265: INFO: Got endpoints: latency-svc-d8p6g [2.572990185s]
Feb 26 11:45:19.326: INFO: Created: latency-svc-87vxt
Feb 26 11:45:19.407: INFO: Got endpoints: latency-svc-87vxt [2.485144548s]
Feb 26 11:45:19.420: INFO: Created: latency-svc-btdsx
Feb 26 11:45:19.467: INFO: Got endpoints: latency-svc-btdsx [2.396377047s]
Feb 26 11:45:19.598: INFO: Created: latency-svc-8rfsj
Feb 26 11:45:19.602: INFO: Got endpoints: latency-svc-8rfsj [2.358354163s]
Feb 26 11:45:19.647: INFO: Created: latency-svc-bj4lb
Feb 26 11:45:19.659: INFO: Got endpoints: latency-svc-bj4lb [2.336380928s]
Feb 26 11:45:19.756: INFO: Created: latency-svc-pggqd
Feb 26 11:45:19.790: INFO: Got endpoints: latency-svc-pggqd [2.31893729s]
Feb 26 11:45:19.811: INFO: Created: latency-svc-k446f
Feb 26 11:45:19.823: INFO: Got endpoints: latency-svc-k446f [1.972529265s]
Feb 26 11:45:19.943: INFO: Created: latency-svc-25drd
Feb 26 11:45:19.954: INFO: Got endpoints: latency-svc-25drd [2.073688569s]
Feb 26 11:45:20.047: INFO: Created: latency-svc-czjj4
Feb 26 11:45:20.111: INFO: Got endpoints: latency-svc-czjj4 [2.036648169s]
Feb 26 11:45:20.138: INFO: Created: latency-svc-knzlc
Feb 26 11:45:20.164: INFO: Got endpoints: latency-svc-knzlc [2.03731491s]
Feb 26 11:45:20.227: INFO: Created: latency-svc-d858d
Feb 26 11:45:20.246: INFO: Got endpoints: latency-svc-d858d [1.994610817s]
Feb 26 11:45:20.390: INFO: Created: latency-svc-d2fng
Feb 26 11:45:20.416: INFO: Got endpoints: latency-svc-d2fng [1.968546473s]
Feb 26 11:45:20.478: INFO: Created: latency-svc-xrrcm
Feb 26 11:45:20.499: INFO: Got endpoints: latency-svc-xrrcm [1.87616012s]
Feb 26 11:45:20.640: INFO: Created: latency-svc-98rsr
Feb 26 11:45:20.660: INFO: Got endpoints: latency-svc-98rsr [1.78587585s]
Feb 26 11:45:20.718: INFO: Created: latency-svc-vc6gw
Feb 26 11:45:20.718: INFO: Got endpoints: latency-svc-vc6gw [1.640300871s]
Feb 26 11:45:20.824: INFO: Created: latency-svc-5gtvg
Feb 26 11:45:20.836: INFO: Got endpoints: latency-svc-5gtvg [1.57091767s]
Feb 26 11:45:20.890: INFO: Created: latency-svc-4nqfw
Feb 26 11:45:20.903: INFO: Got endpoints: latency-svc-4nqfw [1.495849507s]
Feb 26 11:45:21.164: INFO: Created: latency-svc-9b8mg
Feb 26 11:45:21.164: INFO: Got endpoints: latency-svc-9b8mg [1.697270276s]
Feb 26 11:45:21.387: INFO: Created: latency-svc-vxtcd
Feb 26 11:45:21.442: INFO: Got endpoints: latency-svc-vxtcd [1.840244777s]
Feb 26 11:45:21.451: INFO: Created: latency-svc-q76r4
Feb 26 11:45:21.532: INFO: Got endpoints: latency-svc-q76r4 [1.873407329s]
Feb 26 11:45:21.537: INFO: Created: latency-svc-9r87h
Feb 26 11:45:21.559: INFO: Got endpoints: latency-svc-9r87h [1.769091776s]
Feb 26 11:45:21.598: INFO: Created: latency-svc-p4n4f
Feb 26 11:45:21.619: INFO: Got endpoints: latency-svc-p4n4f [1.795919123s]
Feb 26 11:45:21.732: INFO: Created: latency-svc-wtmdx
Feb 26 11:45:21.745: INFO: Got endpoints: latency-svc-wtmdx [1.790374481s]
Feb 26 11:45:21.923: INFO: Created: latency-svc-7594c
Feb 26 11:45:21.943: INFO: Got endpoints: latency-svc-7594c [1.830957817s]
Feb 26 11:45:21.986: INFO: Created: latency-svc-gnxbg
Feb 26 11:45:21.995: INFO: Got endpoints: latency-svc-gnxbg [1.830958849s]
Feb 26 11:45:22.123: INFO: Created: latency-svc-4t82c
Feb 26 11:45:22.153: INFO: Got endpoints: latency-svc-4t82c [1.906701716s]
Feb 26 11:45:22.295: INFO: Created: latency-svc-t7w58
Feb 26 11:45:22.304: INFO: Got endpoints: latency-svc-t7w58 [1.887710642s]
Feb 26 11:45:22.347: INFO: Created: latency-svc-phqrx
Feb 26 11:45:22.578: INFO: Got endpoints: latency-svc-phqrx [2.078595911s]
Feb 26 11:45:22.615: INFO: Created: latency-svc-sg8cr
Feb 26 11:45:22.899: INFO: Got endpoints: latency-svc-sg8cr [2.238940628s]
Feb 26 11:45:23.038: INFO: Created: latency-svc-sxhj2
Feb 26 11:45:23.305: INFO: Got endpoints: latency-svc-sxhj2 [2.587124179s]
Feb 26 11:45:23.559: INFO: Created: latency-svc-2h5h5
Feb 26 11:45:23.904: INFO: Got endpoints: latency-svc-2h5h5 [1.004307225s]
Feb 26 11:45:23.938: INFO: Created: latency-svc-gkcnt
Feb 26 11:45:23.949: INFO: Got endpoints: latency-svc-gkcnt [3.113145063s]
Feb 26 11:45:24.133: INFO: Created: latency-svc-b7tt7
Feb 26 11:45:24.139: INFO: Got endpoints: latency-svc-b7tt7 [3.235857849s]
Feb 26 11:45:24.199: INFO: Created: latency-svc-psdbt
Feb 26 11:45:24.289: INFO: Got endpoints: latency-svc-psdbt [3.124275925s]
Feb 26 11:45:24.323: INFO: Created: latency-svc-x64cc
Feb 26 11:45:24.371: INFO: Got endpoints: latency-svc-x64cc [2.928108434s]
Feb 26 11:45:24.620: INFO: Created: latency-svc-j2bgp
Feb 26 11:45:24.796: INFO: Got endpoints: latency-svc-j2bgp [3.263674045s]
Feb 26 11:45:24.824: INFO: Created: latency-svc-8pds6
Feb 26 11:45:24.845: INFO: Got endpoints: latency-svc-8pds6 [3.285414563s]
Feb 26 11:45:24.892: INFO: Created: latency-svc-rmzmj
Feb 26 11:45:25.072: INFO: Got endpoints: latency-svc-rmzmj [3.453038724s]
Feb 26 11:45:25.115: INFO: Created: latency-svc-79pr5
Feb 26 11:45:25.129: INFO: Got endpoints: latency-svc-79pr5 [3.383700814s]
Feb 26 11:45:25.259: INFO: Created: latency-svc-vhlzg
Feb 26 11:45:25.276: INFO: Got endpoints: latency-svc-vhlzg [3.332884473s]
Feb 26 11:45:25.327: INFO: Created: latency-svc-qtxnz
Feb 26 11:45:27.144: INFO: Got endpoints: latency-svc-qtxnz [5.14924678s]
Feb 26 11:45:27.294: INFO: Created: latency-svc-db2hx
Feb 26 11:45:27.452: INFO: Got endpoints: latency-svc-db2hx [5.29879788s]
Feb 26 11:45:27.599: INFO: Created: latency-svc-m4292
Feb 26 11:45:27.612: INFO: Got endpoints: latency-svc-m4292 [5.308300335s]
Feb 26 11:45:27.653: INFO: Created: latency-svc-whltz
Feb 26 11:45:27.676: INFO: Got endpoints: latency-svc-whltz [5.09704826s]
Feb 26 11:45:27.817: INFO: Created: latency-svc-8jjdd
Feb 26 11:45:27.880: INFO: Created: latency-svc-dzgl4
Feb 26 11:45:27.969: INFO: Got endpoints: latency-svc-8jjdd [4.663724679s]
Feb 26 11:45:27.969: INFO: Got endpoints: latency-svc-dzgl4 [4.064557213s]
Feb 26 11:45:27.982: INFO: Created: latency-svc-kfkng
Feb 26 11:45:28.007: INFO: Got endpoints: latency-svc-kfkng [4.057902071s]
Feb 26 11:45:28.177: INFO: Created: latency-svc-sndxc
Feb 26 11:45:28.200: INFO: Got endpoints: latency-svc-sndxc [4.060497693s]
Feb 26 11:45:28.345: INFO: Created: latency-svc-bbfc5
Feb 26 11:45:28.355: INFO: Got endpoints: latency-svc-bbfc5 [4.065215597s]
Feb 26 11:45:28.404: INFO: Created: latency-svc-t4mc7
Feb 26 11:45:28.579: INFO: Got endpoints: latency-svc-t4mc7 [4.208245559s]
Feb 26 11:45:28.611: INFO: Created: latency-svc-8v9vf
Feb 26 11:45:28.615: INFO: Got endpoints: latency-svc-8v9vf [3.818629446s]
Feb 26 11:45:28.720: INFO: Created: latency-svc-j6xv9
Feb 26 11:45:28.740: INFO: Got endpoints: latency-svc-j6xv9 [3.894868706s]
Feb 26 11:45:28.796: INFO: Created: latency-svc-j8kjt
Feb 26 11:45:28.928: INFO: Got endpoints: latency-svc-j8kjt [3.855131819s]
Feb 26 11:45:28.951: INFO: Created: latency-svc-2m8nt
Feb 26 11:45:29.169: INFO: Got endpoints: latency-svc-2m8nt [4.040442304s]
Feb 26 11:45:29.199: INFO: Created: latency-svc-hjcxr
Feb 26 11:45:29.201: INFO: Got endpoints: latency-svc-hjcxr [3.924901156s]
Feb 26 11:45:29.254: INFO: Created: latency-svc-rzm6g
Feb 26 11:45:29.384: INFO: Got endpoints: latency-svc-rzm6g [2.240076407s]
Feb 26 11:45:29.416: INFO: Created: latency-svc-wdpxj
Feb 26 11:45:29.425: INFO: Got endpoints: latency-svc-wdpxj [1.971835846s]
Feb 26 11:45:29.569: INFO: Created: latency-svc-vmmck
Feb 26 11:45:29.598: INFO: Got endpoints: latency-svc-vmmck [1.985055652s]
Feb 26 11:45:29.630: INFO: Created: latency-svc-t5hmw
Feb 26 11:45:29.775: INFO: Got endpoints: latency-svc-t5hmw [2.099495329s]
Feb 26 11:45:29.818: INFO: Created: latency-svc-lvtt7
Feb 26 11:45:29.838: INFO: Got endpoints: latency-svc-lvtt7 [1.86900203s]
Feb 26 11:45:29.942: INFO: Created: latency-svc-cgksf
Feb 26 11:45:29.954: INFO: Got endpoints: latency-svc-cgksf [1.984308122s]
Feb 26 11:45:30.116: INFO: Created: latency-svc-gg6n4
Feb 26 11:45:30.136: INFO: Got endpoints: latency-svc-gg6n4 [2.128643342s]
Feb 26 11:45:30.195: INFO: Created: latency-svc-8x46f
Feb 26 11:45:30.282: INFO: Got endpoints: latency-svc-8x46f [2.081642195s]
Feb 26 11:45:30.314: INFO: Created: latency-svc-l5z9s
Feb 26 11:45:30.318: INFO: Got endpoints: latency-svc-l5z9s [1.963410061s]
Feb 26 11:45:30.383: INFO: Created: latency-svc-22lfp
Feb 26 11:45:30.534: INFO: Got endpoints: latency-svc-22lfp [1.954359627s]
Feb 26 11:45:30.553: INFO: Created: latency-svc-tnwj2
Feb 26 11:45:30.594: INFO: Got endpoints: latency-svc-tnwj2 [1.978113026s]
Feb 26 11:45:30.768: INFO: Created: latency-svc-d4p4w
Feb 26 11:45:30.927: INFO: Got endpoints: latency-svc-d4p4w [2.187184193s]
Feb 26 11:45:30.934: INFO: Created: latency-svc-j7jgd
Feb 26 11:45:30.966: INFO: Got endpoints: latency-svc-j7jgd [2.037670555s]
Feb 26 11:45:31.210: INFO: Created: latency-svc-vxp4n
Feb 26 11:45:31.214: INFO: Got endpoints: latency-svc-vxp4n [2.045081356s]
Feb 26 11:45:31.397: INFO: Created: latency-svc-z8bpm
Feb 26 11:45:31.421: INFO: Got endpoints: latency-svc-z8bpm [2.220466047s]
Feb 26 11:45:31.464: INFO: Created: latency-svc-sj25f
Feb 26 11:45:31.625: INFO: Got endpoints: latency-svc-sj25f [2.239955514s]
Feb 26 11:45:31.659: INFO: Created: latency-svc-qmj8m
Feb 26 11:45:31.681: INFO: Got endpoints: latency-svc-qmj8m [2.256101396s]
Feb 26 11:45:31.860: INFO: Created: latency-svc-wzkdj
Feb 26 11:45:31.899: INFO: Got endpoints: latency-svc-wzkdj [2.301137845s]
Feb 26 11:45:32.095: INFO: Created: latency-svc-nzx4x
Feb 26 11:45:32.118: INFO: Got endpoints: latency-svc-nzx4x [2.341589193s]
Feb 26 11:45:32.179: INFO: Created: latency-svc-2t247
Feb 26 11:45:32.260: INFO: Got endpoints: latency-svc-2t247 [2.421189368s]
Feb 26 11:45:32.276: INFO: Created: latency-svc-9pv8l
Feb 26 11:45:32.291: INFO: Got endpoints: latency-svc-9pv8l [2.337677633s]
Feb 26 11:45:32.340: INFO: Created: latency-svc-g79ng
Feb 26 11:45:32.354: INFO: Got endpoints: latency-svc-g79ng [2.21777432s]
Feb 26 11:45:32.488: INFO: Created: latency-svc-fqqnb
Feb 26 11:45:32.496: INFO: Got endpoints: latency-svc-fqqnb [2.213335155s]
Feb 26 11:45:32.689: INFO: Created: latency-svc-xsmnw
Feb 26 11:45:32.702: INFO: Got endpoints: latency-svc-xsmnw [2.38329371s]
Feb 26 11:45:32.745: INFO: Created: latency-svc-hgxwl
Feb 26 11:45:32.762: INFO: Got endpoints: latency-svc-hgxwl [2.227308615s]
Feb 26 11:45:32.872: INFO: Created: latency-svc-6sq7f
Feb 26 11:45:32.895: INFO: Got endpoints: latency-svc-6sq7f [2.301217863s]
Feb 26 11:45:33.218: INFO: Created: latency-svc-w62zs
Feb 26 11:45:33.234: INFO: Got endpoints: latency-svc-w62zs [2.305916694s]
Feb 26 11:45:33.302: INFO: Created: latency-svc-zpknk
Feb 26 11:45:33.508: INFO: Got endpoints: latency-svc-zpknk [2.541852436s]
Feb 26 11:45:33.531: INFO: Created: latency-svc-sq7j5
Feb 26 11:45:33.596: INFO: Got endpoints: latency-svc-sq7j5 [2.38132941s]
Feb 26 11:45:33.765: INFO: Created: latency-svc-vggkr
Feb 26 11:45:33.795: INFO: Got endpoints: latency-svc-vggkr [2.373749782s]
Feb 26 11:45:34.068: INFO: Created: latency-svc-cvxps
Feb 26 11:45:34.105: INFO: Got endpoints: latency-svc-cvxps [2.479330406s]
Feb 26 11:45:34.285: INFO: Created: latency-svc-9hn2w
Feb 26 11:45:34.305: INFO: Got endpoints: latency-svc-9hn2w [2.623505797s]
Feb 26 11:45:34.636: INFO: Created: latency-svc-c8qbg
Feb 26 11:45:34.755: INFO: Got endpoints: latency-svc-c8qbg [2.855139613s]
Feb 26 11:45:34.769: INFO: Created: latency-svc-bmwzk
Feb 26 11:45:34.789: INFO: Got endpoints: latency-svc-bmwzk [2.670825117s]
Feb 26 11:45:34.907: INFO: Created: latency-svc-gw97r
Feb 26 11:45:34.918: INFO: Got endpoints: latency-svc-gw97r [2.657713095s]
Feb 26 11:45:35.667: INFO: Created: latency-svc-b2h9n
Feb 26 11:45:35.684: INFO: Got endpoints: latency-svc-b2h9n [3.392059268s]
Feb 26 11:45:36.497: INFO: Created: latency-svc-dfbc2
Feb 26 11:45:36.635: INFO: Got endpoints: latency-svc-dfbc2 [4.280737459s]
Feb 26 11:45:36.704: INFO: Created: latency-svc-vnhc5
Feb 26 11:45:36.817: INFO: Got endpoints: latency-svc-vnhc5 [4.321301175s]
Feb 26 11:45:36.892: INFO: Created: latency-svc-pmfgv
Feb 26 11:45:36.903: INFO: Got endpoints: latency-svc-pmfgv [4.200592646s]
Feb 26 11:45:37.077: INFO: Created: latency-svc-g6hvc
Feb 26 11:45:37.124: INFO: Got endpoints: latency-svc-g6hvc [4.361263784s]
Feb 26 11:45:37.269: INFO: Created: latency-svc-5wn8p
Feb 26 11:45:37.298: INFO: Got endpoints: latency-svc-5wn8p [4.40230932s]
Feb 26 11:45:37.358: INFO: Created: latency-svc-pm4c2
Feb 26 11:45:37.437: INFO: Got endpoints: latency-svc-pm4c2 [4.203290565s]
Feb 26 11:45:37.471: INFO: Created: latency-svc-rvft4
Feb 26 11:45:37.640: INFO: Got endpoints: latency-svc-rvft4 [4.131488906s]
Feb 26 11:45:37.652: INFO: Created: latency-svc-clb5w
Feb 26 11:45:37.676: INFO: Got endpoints: latency-svc-clb5w [4.079944095s]
Feb 26 11:45:37.731: INFO: Created: latency-svc-mwfrj
Feb 26 11:45:37.847: INFO: Got endpoints: latency-svc-mwfrj [4.051660032s]
Feb 26 11:45:37.868: INFO: Created: latency-svc-q99ld
Feb 26 11:45:37.905: INFO: Got endpoints: latency-svc-q99ld [3.800289446s]
Feb 26 11:45:38.124: INFO: Created: latency-svc-skhr8
Feb 26 11:45:38.175: INFO: Created: latency-svc-2zsct
Feb 26 11:45:38.202: INFO: Got endpoints: latency-svc-skhr8 [3.896249979s]
Feb 26 11:45:38.320: INFO: Created: latency-svc-dr4x7
Feb 26 11:45:38.328: INFO: Got endpoints: latency-svc-2zsct [3.572564761s]
Feb 26 11:45:38.342: INFO: Got endpoints: latency-svc-dr4x7 [3.553098354s]
Feb 26 11:45:38.519: INFO: Created: latency-svc-r46rv
Feb 26 11:45:38.569: INFO: Got endpoints: latency-svc-r46rv [3.650429387s]
Feb 26 11:45:38.756: INFO: Created: latency-svc-7pzh4
Feb 26 11:45:38.788: INFO: Got endpoints: latency-svc-7pzh4 [3.103388623s]
Feb 26 11:45:38.812: INFO: Created: latency-svc-cd4hf
Feb 26 11:45:38.947: INFO: Got endpoints: latency-svc-cd4hf [2.311245891s]
Feb 26 11:45:38.976: INFO: Created: latency-svc-pd29j
Feb 26 11:45:39.181: INFO: Got endpoints: latency-svc-pd29j [2.362674726s]
Feb 26 11:45:39.188: INFO: Created: latency-svc-7bb9n
Feb 26 11:45:39.210: INFO: Got endpoints: latency-svc-7bb9n [2.306716551s]
Feb 26 11:45:39.283: INFO: Created: latency-svc-ms5ck
Feb 26 11:45:39.372: INFO: Got endpoints: latency-svc-ms5ck [2.247255616s]
Feb 26 11:45:39.439: INFO: Created: latency-svc-cvmqt
Feb 26 11:45:39.467: INFO: Got endpoints: latency-svc-cvmqt [2.167749756s]
Feb 26 11:45:39.611: INFO: Created: latency-svc-c6kj9
Feb 26 11:45:39.626: INFO: Got endpoints: latency-svc-c6kj9 [2.188235324s]
Feb 26 11:45:39.676: INFO: Created: latency-svc-fcw2p
Feb 26 11:45:39.804: INFO: Created: latency-svc-tjq9j
Feb 26 11:45:39.811: INFO: Got endpoints: latency-svc-fcw2p [2.170892922s]
Feb 26 11:45:39.961: INFO: Got endpoints: latency-svc-tjq9j [2.284347073s]
Feb 26 11:45:39.968: INFO: Created: latency-svc-m24vl
Feb 26 11:45:39.993: INFO: Got endpoints: latency-svc-m24vl [2.144918112s]
Feb 26 11:45:40.042: INFO: Created: latency-svc-dlljp
Feb 26 11:45:40.177: INFO: Got endpoints: latency-svc-dlljp [2.271455933s]
Feb 26 11:45:40.212: INFO: Created: latency-svc-mt94l
Feb 26 11:45:40.240: INFO: Got endpoints: latency-svc-mt94l [2.03855934s]
Feb 26 11:45:40.268: INFO: Created: latency-svc-7nbqt
Feb 26 11:45:40.370: INFO: Got endpoints: latency-svc-7nbqt [2.041899432s]
Feb 26 11:45:40.394: INFO: Created: latency-svc-5zl49
Feb 26 11:45:40.420: INFO: Got endpoints: latency-svc-5zl49 [2.07745598s]
Feb 26 11:45:40.448: INFO: Created: latency-svc-4xdns
Feb 26 11:45:40.568: INFO: Got endpoints: latency-svc-4xdns [1.99936493s]
Feb 26 11:45:40.610: INFO: Created: latency-svc-cjtpw
Feb 26 11:45:40.655: INFO: Got endpoints: latency-svc-cjtpw [1.867250182s]
Feb 26 11:45:40.737: INFO: Created: latency-svc-bx9vr
Feb 26 11:45:40.755: INFO: Got endpoints: latency-svc-bx9vr [1.808168896s]
Feb 26 11:45:40.799: INFO: Created: latency-svc-zsp4s
Feb 26 11:45:40.900: INFO: Got endpoints: latency-svc-zsp4s [1.717862215s]
Feb 26 11:45:40.959: INFO: Created: latency-svc-plvt7
Feb 26 11:45:40.980: INFO: Got endpoints: latency-svc-plvt7 [1.770748786s]
Feb 26 11:45:41.211: INFO: Created: latency-svc-667mh
Feb 26 11:45:41.240: INFO: Got endpoints: latency-svc-667mh [1.867334493s]
Feb 26 11:45:41.471: INFO: Created: latency-svc-t4www
Feb 26 11:45:41.494: INFO: Got endpoints: latency-svc-t4www [2.026944204s]
Feb 26 11:45:41.628: INFO: Created: latency-svc-q9w5n
Feb 26 11:45:41.658: INFO: Got endpoints: latency-svc-q9w5n [2.031584193s]
Feb 26 11:45:41.722: INFO: Created: latency-svc-8bbvz
Feb 26 11:45:41.784: INFO: Got endpoints: latency-svc-8bbvz [1.97249517s]
Feb 26 11:45:41.784: INFO: Latencies: [166.765025ms 293.201696ms 425.363552ms 461.343257ms 725.332507ms 870.864675ms 1.004307225s 1.052893654s 1.082389879s 1.29156926s 1.330027154s 1.495849507s 1.53170038s 1.57091767s 1.590927154s 1.640300871s 1.697270276s 1.717862215s 1.769091776s 1.770748786s 1.78587585s 1.790374481s 1.795919123s 1.808168896s 1.823631203s 1.830957817s 1.830958849s 1.840244777s 1.867250182s 1.867334493s 1.86900203s 1.873407329s 1.87616012s 1.887710642s 1.906701716s 1.911120972s 1.927904863s 1.930872229s 1.954359627s 1.963410061s 1.968546473s 1.971835846s 1.97249517s 1.972529265s 1.974570063s 1.978113026s 1.984308122s 1.985055652s 1.994610817s 1.99936493s 2.026944204s 2.031584193s 2.036648169s 2.03731491s 2.037670555s 2.03855934s 2.041899432s 2.045081356s 2.073688569s 2.076488088s 2.07745598s 2.078595911s 2.081642195s 2.099495329s 2.123748936s 2.128643342s 2.144918112s 2.159100623s 2.167749756s 2.170892922s 2.187184193s 2.188235324s 2.188817877s 2.19339072s 2.197575847s 2.208360887s 2.210736519s 2.213335155s 2.21777432s 2.219091857s 2.220466047s 2.227308615s 2.231172265s 2.236279553s 2.238940628s 2.239955514s 2.240076407s 2.247255616s 2.256101396s 2.265589277s 2.271455933s 2.275869301s 2.284347073s 2.287654355s 2.290335622s 2.301137845s 2.301217863s 2.305916694s 2.306716551s 2.311245891s 2.313172136s 2.31893729s 2.328635127s 2.328636364s 2.333974098s 2.336380928s 2.337677633s 2.341589193s 2.352453846s 2.356027082s 2.358354163s 2.362406475s 2.362674726s 2.368481489s 2.373267013s 2.373749782s 2.38132941s 2.382150284s 2.38329371s 2.396377047s 2.399433097s 2.414387626s 2.415331159s 2.421189368s 2.421536656s 2.424274477s 2.427981215s 2.428979563s 2.435047735s 2.440373608s 2.442496587s 2.456630082s 2.459119612s 2.460025956s 2.472211018s 2.479330406s 2.485144548s 2.490170718s 2.491657963s 2.507150556s 2.517231412s 2.529556695s 2.531136232s 2.541852436s 2.568291438s 2.572990185s 2.587124179s 2.596717395s 2.615692803s 2.623505797s 2.650605937s 2.657713095s 2.670825117s 2.676688495s 2.752191703s 2.768270973s 2.77069491s 2.855139613s 2.905120598s 2.928108434s 2.936786487s 3.103388623s 3.113145063s 3.124275925s 3.235857849s 3.263674045s 3.285414563s 3.332884473s 3.383700814s 3.392059268s 3.453038724s 3.553098354s 3.572564761s 3.650429387s 3.800289446s 3.818629446s 3.855131819s 3.894868706s 3.896249979s 3.924901156s 4.040442304s 4.051660032s 4.057902071s 4.060497693s 4.064557213s 4.065215597s 4.079944095s 4.131488906s 4.200592646s 4.203290565s 4.208245559s 4.280737459s 4.321301175s 4.361263784s 4.40230932s 4.663724679s 5.09704826s 5.14924678s 5.29879788s 5.308300335s]
Feb 26 11:45:41.785: INFO: 50 %ile: 2.313172136s
Feb 26 11:45:41.785: INFO: 90 %ile: 4.040442304s
Feb 26 11:45:41.785: INFO: 99 %ile: 5.29879788s
Feb 26 11:45:41.785: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:45:41.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-t4j4r" for this suite.
Feb 26 11:46:53.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:46:54.014: INFO: namespace: e2e-tests-svc-latency-t4j4r, resource: bindings, ignored listing per whitelist
Feb 26 11:46:54.045: INFO: namespace e2e-tests-svc-latency-t4j4r deletion completed in 1m12.244937771s

• [SLOW TEST:116.970 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
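
The three %ile lines in the latency spec above are just order statistics over the 200 samples it lists: sort the per-service durations and index into the sorted slice. A minimal Go sketch of that bookkeeping, assuming the simple index = q*N convention (the framework's exact rounding may differ slightly); fed the full sample list it should land on the same three values:

  package main

  import (
      "fmt"
      "sort"
      "time"
  )

  // percentile returns the q-th quantile (0 < q <= 1) of an already-sorted slice,
  // using the plain index = q*N convention; this is an illustrative sketch, not
  // the e2e framework's exact rounding.
  func percentile(sorted []time.Duration, q float64) time.Duration {
      if len(sorted) == 0 {
          return 0
      }
      i := int(q * float64(len(sorted)))
      if i >= len(sorted) {
          i = len(sorted) - 1
      }
      return sorted[i]
  }

  func main() {
      // A handful of hypothetical samples; the run above produced 200 of them.
      samples := []time.Duration{
          4040 * time.Millisecond,
          167 * time.Millisecond,
          2313 * time.Millisecond,
          5298 * time.Millisecond,
      }
      sort.Slice(samples, func(a, b int) bool { return samples[a] < samples[b] })
      for _, q := range []float64{0.50, 0.90, 0.99} {
          fmt.Printf("%.0f %%ile: %v\n", q*100, percentile(samples, q))
      }
  }
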
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:46:54.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0226 11:47:04.608091       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 11:47:04.608: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:47:04.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rgzrf" for this suite.
Feb 26 11:47:10.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:47:10.811: INFO: namespace: e2e-tests-gc-rgzrf, resource: bindings, ignored listing per whitelist
Feb 26 11:47:10.818: INFO: namespace e2e-tests-gc-rgzrf deletion completed in 6.172650149s

• [SLOW TEST:16.772 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
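
The garbage-collector spec above boils down to one API call: delete the ReplicationController without orphaning, i.e. with a propagation policy that lets the garbage collector remove the dependent pods, then poll until those pods are gone. A rough client-go sketch of that delete; the signatures match client-go from roughly the release used in this run (newer client-go adds a context argument and takes the options by value), and the namespace and RC name are placeholders, not the ones the suite generated:

  package main

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      // Build a clientset from the same kubeconfig the suite logs above.
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      client, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // Background propagation = "not orphaning": the RC goes away immediately
      // and the garbage collector deletes the pods it owned, which is what the
      // spec then waits for.
      policy := metav1.DeletePropagationBackground
      err = client.CoreV1().ReplicationControllers("default").Delete(
          "example-rc", // placeholder name
          &metav1.DeleteOptions{PropagationPolicy: &policy},
      )
      if err != nil {
          panic(err)
      }
      fmt.Println("rc deleted; dependents left to the garbage collector")
  }
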
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:47:10.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 11:47:11.046: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-zvfdj" to be "success or failure"
Feb 26 11:47:11.060: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.428149ms
Feb 26 11:47:13.073: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026304305s
Feb 26 11:47:15.123: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076803668s
Feb 26 11:47:17.357: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309951278s
Feb 26 11:47:19.380: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333331424s
Feb 26 11:47:21.400: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.352941529s
STEP: Saw pod success
Feb 26 11:47:21.400: INFO: Pod "downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:47:21.410: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 11:47:21.709: INFO: Waiting for pod downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008 to disappear
Feb 26 11:47:21.729: INFO: Pod downwardapi-volume-b9473986-588d-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:47:21.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zvfdj" for this suite.
Feb 26 11:47:27.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:47:27.907: INFO: namespace: e2e-tests-downward-api-zvfdj, resource: bindings, ignored listing per whitelist
Feb 26 11:47:27.939: INFO: namespace e2e-tests-downward-api-zvfdj deletion completed in 6.197944426s

• [SLOW TEST:17.121 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
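
The pod in the memory-request spec above projects its own requests.memory value into a file through a downwardAPI volume with a resourceFieldRef, and the suite then reads that file back from the container's logs. A small sketch of how such a volume is declared with the k8s.io/api types; only the container name "client-container" comes from the log, while the file path and divisor are illustrative guesses:

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      "k8s.io/apimachinery/pkg/api/resource"
  )

  func main() {
      // Project the container's memory request into a "memory_request" file
      // under wherever the volume gets mounted (path chosen for illustration).
      item := corev1.DownwardAPIVolumeFile{
          Path: "memory_request",
          ResourceFieldRef: &corev1.ResourceFieldSelector{
              ContainerName: "client-container",
              Resource:      "requests.memory",
              Divisor:       resource.MustParse("1Mi"),
          },
      }
      vol := corev1.Volume{
          Name: "podinfo",
          VolumeSource: corev1.VolumeSource{
              DownwardAPI: &corev1.DownwardAPIVolumeSource{
                  Items: []corev1.DownwardAPIVolumeFile{item},
              },
          },
      }
      fmt.Printf("%+v\n", vol)
  }
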
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:47:27.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 26 11:47:28.149: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix530975475/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:47:28.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bx925" for this suite.
Feb 26 11:47:34.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:47:34.344: INFO: namespace: e2e-tests-kubectl-bx925, resource: bindings, ignored listing per whitelist
Feb 26 11:47:34.662: INFO: namespace e2e-tests-kubectl-bx925 deletion completed in 6.406192088s

• [SLOW TEST:6.723 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
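
The proxy spec above starts kubectl proxy listening on a Unix socket and then retrieves /api/ through it. A minimal Go client doing the same fetch over a socket; the path here is a placeholder, since the run used a throwaway directory under /tmp:

  package main

  import (
      "context"
      "fmt"
      "io/ioutil"
      "net"
      "net/http"
  )

  func main() {
      socket := "/tmp/kubectl-proxy-unix/test" // placeholder socket path

      // Dial the Unix socket for every request; the host part of the URL is
      // ignored, it only has to be syntactically valid.
      client := &http.Client{
          Transport: &http.Transport{
              DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                  var d net.Dialer
                  return d.DialContext(ctx, "unix", socket)
              },
          },
      }

      resp, err := client.Get("http://unix/api/")
      if err != nil {
          panic(err)
      }
      defer resp.Body.Close()

      body, err := ioutil.ReadAll(resp.Body)
      if err != nil {
          panic(err)
      }
      fmt.Println(string(body))
  }
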
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:47:34.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 26 11:47:45.755: INFO: Successfully updated pod "labelsupdatec7a68dfd-588d-11ea-8134-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:47:47.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8jg6n" for this suite.
Feb 26 11:48:11.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:48:12.052: INFO: namespace: e2e-tests-downward-api-8jg6n, resource: bindings, ignored listing per whitelist
Feb 26 11:48:12.093: INFO: namespace e2e-tests-downward-api-8jg6n deletion completed in 24.246304447s

• [SLOW TEST:37.430 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
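
Unlike the memory-request spec earlier, the labels spec above uses a fieldRef on metadata.labels, so after the "Successfully updated pod" step the kubelet rewrites the projected file and the test only has to watch its contents change. The corresponding volume item, sketched with the same k8s.io/api types as before (file path illustrative):

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
  )

  func main() {
      // fieldRef (as opposed to resourceFieldRef) items track live object
      // metadata, so an update to the pod's labels shows up in the projected file.
      item := corev1.DownwardAPIVolumeFile{
          Path: "labels",
          FieldRef: &corev1.ObjectFieldSelector{
              FieldPath: "metadata.labels",
          },
      }
      fmt.Printf("%+v\n", item)
  }
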
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:48:12.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 11:48:12.246: INFO: Creating deployment "nginx-deployment"
Feb 26 11:48:12.306: INFO: Waiting for observed generation 1
Feb 26 11:48:14.782: INFO: Waiting for all required pods to come up
Feb 26 11:48:14.795: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 26 11:48:53.500: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 26 11:48:53.514: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 26 11:48:53.531: INFO: Updating deployment nginx-deployment
Feb 26 11:48:53.531: INFO: Waiting for observed generation 2
Feb 26 11:48:55.558: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 26 11:48:55.566: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 26 11:48:55.570: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 26 11:48:55.582: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 26 11:48:55.582: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 26 11:48:55.584: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 26 11:48:55.589: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 26 11:48:55.589: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 26 11:48:55.602: INFO: Updating deployment nginx-deployment
Feb 26 11:48:55.602: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 26 11:48:58.681: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 26 11:48:58.883: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
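
The two .spec.replicas figures just verified (20 and 13) fall out of proportional scaling: with maxSurge=3 the scaled deployment may run 33 replicas, the two ReplicaSets currently hold 8+5=13, and the extra 20 are split in proportion to their current sizes. A back-of-the-envelope Go version of that arithmetic; the deployment controller's real implementation carries the shares through annotations and hands the rounding leftover to the other ReplicaSet, but on these numbers it lands in the same place:

  package main

  import (
      "fmt"
      "math"
  )

  func main() {
      // Numbers from the run above.
      const (
          newDesired = 30 // deployment scaled from 10 to 30
          maxSurge   = 3  // RollingUpdate.MaxSurge
          oldRS      = 8  // first rollout's ReplicaSet before the scale
          newRS      = 5  // second rollout's ReplicaSet before the scale
          oldAllowed = 13 // what the deployment could run before the scale (10 + maxSurge)
      )

      allowed := newDesired + maxSurge // 33
      current := oldRS + newRS         // 13
      toAdd := allowed - current       // 20 extra replicas to hand out

      // Split proportionally to current size; the leftover goes to the other ReplicaSet.
      oldShare := int(math.Round(float64(oldRS*toAdd) / float64(oldAllowed))) // 12
      newShare := toAdd - oldShare                                            // 8

      fmt.Printf("first rollout's replicaset:  %d -> %d\n", oldRS, oldRS+oldShare) // 8 -> 20
      fmt.Printf("second rollout's replicaset: %d -> %d\n", newRS, newRS+newShare) // 5 -> 13
  }
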
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 26 11:49:02.053: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbvgq/deployments/nginx-deployment,UID:ddc44874-588d-11ea-a994-fa163e34d433,ResourceVersion:22975814,Generation:3,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-26 11:48:54 +0000 UTC 2020-02-26 11:48:12 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-26 11:48:57 +0000 UTC 2020-02-26 11:48:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 26 11:49:04.128: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbvgq/replicasets/nginx-deployment-5c98f8fb5,UID:f661131a-588d-11ea-a994-fa163e34d433,ResourceVersion:22975820,Generation:3,CreationTimestamp:2020-02-26 11:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ddc44874-588d-11ea-a994-fa163e34d433 0xc0021cd637 0xc0021cd638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 26 11:49:04.128: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 26 11:49:04.128: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbvgq/replicasets/nginx-deployment-85ddf47c5d,UID:ddd0ab10-588d-11ea-a994-fa163e34d433,ResourceVersion:22975796,Generation:3,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ddc44874-588d-11ea-a994-fa163e34d433 0xc0021cd6f7 0xc0021cd6f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 26 11:49:04.789: INFO: Pod "nginx-deployment-5c98f8fb5-27rr9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-27rr9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-27rr9,UID:f9936bc1-588d-11ea-a994-fa163e34d433,ResourceVersion:22975770,Generation:0,CreationTimestamp:2020-02-26 11:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc002189967 0xc002189968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021899d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021899f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.789: INFO: Pod "nginx-deployment-5c98f8fb5-46459" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-46459,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-46459,UID:faa2d790-588d-11ea-a994-fa163e34d433,ResourceVersion:22975812,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc002189a67 0xc002189a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002189d80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002189da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.789: INFO: Pod "nginx-deployment-5c98f8fb5-54c44" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-54c44,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-54c44,UID:f9aa2dbd-588d-11ea-a994-fa163e34d433,ResourceVersion:22975785,Generation:0,CreationTimestamp:2020-02-26 11:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc002189e17 0xc002189e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002189e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002189ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.789: INFO: Pod "nginx-deployment-5c98f8fb5-54swm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-54swm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-54swm,UID:fa7a648b-588d-11ea-a994-fa163e34d433,ResourceVersion:22975805,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90097 0xc001d90098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d901a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.789: INFO: Pod "nginx-deployment-5c98f8fb5-7z88t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7z88t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-7z88t,UID:f6783f93-588d-11ea-a994-fa163e34d433,ResourceVersion:22975752,Generation:0,CreationTimestamp:2020-02-26 11:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d902f7 0xc001d902f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:48:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.790: INFO: Pod "nginx-deployment-5c98f8fb5-9p2vq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9p2vq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-9p2vq,UID:f6784a02-588d-11ea-a994-fa163e34d433,ResourceVersion:22975749,Generation:0,CreationTimestamp:2020-02-26 11:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90637 0xc001d90638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:48:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.790: INFO: Pod "nginx-deployment-5c98f8fb5-9vg87" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9vg87,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-9vg87,UID:f6ca2534-588d-11ea-a994-fa163e34d433,ResourceVersion:22975778,Generation:0,CreationTimestamp:2020-02-26 11:48:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90907 0xc001d90908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90ab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:48:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.791: INFO: Pod "nginx-deployment-5c98f8fb5-fkvsf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fkvsf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-fkvsf,UID:fa7bec5f-588d-11ea-a994-fa163e34d433,ResourceVersion:22975806,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90b97 0xc001d90b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.791: INFO: Pod "nginx-deployment-5c98f8fb5-jq56m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jq56m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-jq56m,UID:f66f7a75-588d-11ea-a994-fa163e34d433,ResourceVersion:22975748,Generation:0,CreationTimestamp:2020-02-26 11:48:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90c97 0xc001d90c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:48:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.791: INFO: Pod "nginx-deployment-5c98f8fb5-qn2tc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qn2tc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-qn2tc,UID:fa7bd969-588d-11ea-a994-fa163e34d433,ResourceVersion:22975804,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90de7 0xc001d90de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.792: INFO: Pod "nginx-deployment-5c98f8fb5-v6nqp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v6nqp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-v6nqp,UID:f6bb626c-588d-11ea-a994-fa163e34d433,ResourceVersion:22975760,Generation:0,CreationTimestamp:2020-02-26 11:48:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d90ee7 0xc001d90ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d90f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d90f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:48:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.792: INFO: Pod "nginx-deployment-5c98f8fb5-wqz4x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wqz4x,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-wqz4x,UID:fa7a9190-588d-11ea-a994-fa163e34d433,ResourceVersion:22975799,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d91037 0xc001d91038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d910a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d910c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.792: INFO: Pod "nginx-deployment-5c98f8fb5-ztk7f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ztk7f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-5c98f8fb5-ztk7f,UID:f9a9f14e-588d-11ea-a994-fa163e34d433,ResourceVersion:22975781,Generation:0,CreationTimestamp:2020-02-26 11:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f661131a-588d-11ea-a994-fa163e34d433 0xc001d91137 0xc001d91138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d911a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d911c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.792: INFO: Pod "nginx-deployment-85ddf47c5d-87ns5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-87ns5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-87ns5,UID:fa7c6764-588d-11ea-a994-fa163e34d433,ResourceVersion:22975808,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91237 0xc001d91238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d912a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d912c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.792: INFO: Pod "nginx-deployment-85ddf47c5d-9jvq8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9jvq8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-9jvq8,UID:ddf78bd6-588d-11ea-a994-fa163e34d433,ResourceVersion:22975694,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91337 0xc001d91338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d913a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d913c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-26 11:48:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://42c528555ecf227ff71399d8e9c3b1e5b4f3c3b56b568f063a3bb1c92746e3f8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.792: INFO: Pod "nginx-deployment-85ddf47c5d-9v4kc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9v4kc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-9v4kc,UID:f9aa6fa8-588d-11ea-a994-fa163e34d433,ResourceVersion:22975809,Generation:0,CreationTimestamp:2020-02-26 11:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91487 0xc001d91488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d914f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.793: INFO: Pod "nginx-deployment-85ddf47c5d-bsktz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bsktz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-bsktz,UID:ddf7ccb0-588d-11ea-a994-fa163e34d433,ResourceVersion:22975667,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91587 0xc001d91588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d915f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-26 11:48:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://468617ab9ab56bfc454587629b2c750853b8f2c9008524d34af9ee180882388a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.793: INFO: Pod "nginx-deployment-85ddf47c5d-dsnb6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dsnb6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-dsnb6,UID:dde6878a-588d-11ea-a994-fa163e34d433,ResourceVersion:22975690,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d916d7 0xc001d916d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-26 11:48:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5c7af4b870763abfb1a59a372b1f35b8c0dfecef20539a5b54434222d1f4d310}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.793: INFO: Pod "nginx-deployment-85ddf47c5d-fjnzj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fjnzj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-fjnzj,UID:fa7c4e63-588d-11ea-a994-fa163e34d433,ResourceVersion:22975803,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d918a7 0xc001d918a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.794: INFO: Pod "nginx-deployment-85ddf47c5d-g44bd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g44bd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-g44bd,UID:dde6a97a-588d-11ea-a994-fa163e34d433,ResourceVersion:22975681,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d919a7 0xc001d919a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91a20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-26 11:48:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a46b91cd529c1b55163452008768fb11e4c924a778b90d82dd9b87f073398d31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.794: INFO: Pod "nginx-deployment-85ddf47c5d-jk2dz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jk2dz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-jk2dz,UID:fa7c194c-588d-11ea-a994-fa163e34d433,ResourceVersion:22975801,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91b07 0xc001d91b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91b70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.794: INFO: Pod "nginx-deployment-85ddf47c5d-k2w4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k2w4v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-k2w4v,UID:fa7c4789-588d-11ea-a994-fa163e34d433,ResourceVersion:22975807,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91c07 0xc001d91c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.794: INFO: Pod "nginx-deployment-85ddf47c5d-kxcm7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kxcm7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-kxcm7,UID:fa7b2bf4-588d-11ea-a994-fa163e34d433,ResourceVersion:22975802,Generation:0,CreationTimestamp:2020-02-26 11:49:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91d07 0xc001d91d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91d70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.794: INFO: Pod "nginx-deployment-85ddf47c5d-m7n54" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m7n54,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-m7n54,UID:dddbea52-588d-11ea-a994-fa163e34d433,ResourceVersion:22975673,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91e07 0xc001d91e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-26 11:48:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ec8f322113945601a2e3627331adae4dfd2347694e01c817074c6ca17ef466ad}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.794: INFO: Pod "nginx-deployment-85ddf47c5d-ns2bb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ns2bb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-ns2bb,UID:f7d7668f-588d-11ea-a994-fa163e34d433,ResourceVersion:22975810,Generation:0,CreationTimestamp:2020-02-26 11:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc001d91f57 0xc001d91f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d91fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d91fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:49:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.795: INFO: Pod "nginx-deployment-85ddf47c5d-phnw2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-phnw2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-phnw2,UID:dde60211-588d-11ea-a994-fa163e34d433,ResourceVersion:22975685,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019da097 0xc0019da098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019da180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019da1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-26 11:48:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7424fd9e23f138b017b95b5b24f509a4db41991d3175e860241b678d8c34b71c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.795: INFO: Pod "nginx-deployment-85ddf47c5d-pr7qw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pr7qw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-pr7qw,UID:f9aa4db1-588d-11ea-a994-fa163e34d433,ResourceVersion:22975780,Generation:0,CreationTimestamp:2020-02-26 11:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019da267 0xc0019da268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019da410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019da430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.795: INFO: Pod "nginx-deployment-85ddf47c5d-sbjnt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sbjnt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-sbjnt,UID:f994972a-588d-11ea-a994-fa163e34d433,ResourceVersion:22975772,Generation:0,CreationTimestamp:2020-02-26 11:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019da4a7 0xc0019da4a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019da510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019da530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:59 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.795: INFO: Pod "nginx-deployment-85ddf47c5d-srw8c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-srw8c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-srw8c,UID:f9aab81d-588d-11ea-a994-fa163e34d433,ResourceVersion:22975797,Generation:0,CreationTimestamp:2020-02-26 11:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019da667 0xc0019da668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019da6d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019da6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.796: INFO: Pod "nginx-deployment-85ddf47c5d-swzvp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-swzvp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-swzvp,UID:dde6954d-588d-11ea-a994-fa163e34d433,ResourceVersion:22975670,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019da7f7 0xc0019da7f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019da860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019da890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-26 11:48:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e1ecc578443b909a9a281a7d7d8be007134a7694336568ea4764e3aaf88cb22d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.796: INFO: Pod "nginx-deployment-85ddf47c5d-vmsgb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vmsgb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-vmsgb,UID:dddb377f-588d-11ea-a994-fa163e34d433,ResourceVersion:22975677,Generation:0,CreationTimestamp:2020-02-26 11:48:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019da977 0xc0019da978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019da9f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019daa10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-26 11:48:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 11:48:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ce72f01ed8bea5e06b8ad7ebd5ed55e5d6dc57f50b494290a5e5c1455cd69da6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.796: INFO: Pod "nginx-deployment-85ddf47c5d-xpcpt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xpcpt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-xpcpt,UID:f994ccf0-588d-11ea-a994-fa163e34d433,ResourceVersion:22975826,Generation:0,CreationTimestamp:2020-02-26 11:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019daad7 0xc0019daad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019dab50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019dab70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:48:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 11:49:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 26 11:49:04.796: INFO: Pod "nginx-deployment-85ddf47c5d-xtmk2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xtmk2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvgq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvgq/pods/nginx-deployment-85ddf47c5d-xtmk2,UID:f9aa78d3-588d-11ea-a994-fa163e34d433,ResourceVersion:22975800,Generation:0,CreationTimestamp:2020-02-26 11:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d ddd0ab10-588d-11ea-a994-fa163e34d433 0xc0019dac57 0xc0019dac58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-h2h7m {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h2h7m,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h2h7m true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019dacc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019dace0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:49:00 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:49:04.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hbvgq" for this suite.
Feb 26 11:49:53.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:49:53.294: INFO: namespace: e2e-tests-deployment-hbvgq, resource: bindings, ignored listing per whitelist
Feb 26 11:49:53.409: INFO: namespace e2e-tests-deployment-hbvgq deletion completed in 48.374952491s

• [SLOW TEST:101.315 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
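The pod dump above is the tail of the proportional-scaling check: pods created with the original ReplicaSet are already available, while the ones added by the scale-up are still Pending or ContainerCreating. The behaviour can be reproduced by hand; the sketch below is illustrative, with only the namespace and the name=nginx label taken from this run and the target replica count chosen arbitrarily.

    # Namespace and pod label come from the log above; the replica count is arbitrary.
    NS=e2e-tests-deployment-hbvgq
    kubectl -n "$NS" scale deployment nginx-deployment --replicas=30
    # During a rollout, each ReplicaSet receives new replicas in proportion to its current size.
    kubectl -n "$NS" get rs -w
    # Mirrors the "is available" / "is not available" lines printed by the test.
    kubectl -n "$NS" get pods -l name=nginx -o wide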
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:49:53.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:50:20.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-m662s" for this suite.
Feb 26 11:51:14.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:51:14.823: INFO: namespace: e2e-tests-kubelet-test-m662s, resource: bindings, ignored listing per whitelist
Feb 26 11:51:14.879: INFO: namespace e2e-tests-kubelet-test-m662s deletion completed in 54.21434574s

• [SLOW TEST:81.470 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
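The Kubelet check above amounts to running a container whose securityContext sets readOnlyRootFilesystem and asserting that a write to the root filesystem fails. A rough stand-alone equivalent is sketched below; the pod name busybox-readonly is hypothetical and not taken from the suite.

    # Hypothetical pod; the suite generates its own names inside e2e-tests-kubelet-test-m662s.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo hello > /file"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF
    # The write should fail with a read-only file system error and the pod should not succeed.
    kubectl logs busybox-readonly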
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:51:14.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-hz9qc
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-hz9qc
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-hz9qc
Feb 26 11:51:15.184: INFO: Found 0 stateful pods, waiting for 1
Feb 26 11:51:25.214: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 26 11:51:35.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 26 11:51:35.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:51:36.160: INFO: stderr: "I0226 11:51:35.451910    2244 log.go:172] (0xc00014c630) (0xc0007165a0) Create stream\nI0226 11:51:35.452803    2244 log.go:172] (0xc00014c630) (0xc0007165a0) Stream added, broadcasting: 1\nI0226 11:51:35.466403    2244 log.go:172] (0xc00014c630) Reply frame received for 1\nI0226 11:51:35.466492    2244 log.go:172] (0xc00014c630) (0xc0006f6000) Create stream\nI0226 11:51:35.466514    2244 log.go:172] (0xc00014c630) (0xc0006f6000) Stream added, broadcasting: 3\nI0226 11:51:35.474972    2244 log.go:172] (0xc00014c630) Reply frame received for 3\nI0226 11:51:35.475028    2244 log.go:172] (0xc00014c630) (0xc0006c0b40) Create stream\nI0226 11:51:35.475053    2244 log.go:172] (0xc00014c630) (0xc0006c0b40) Stream added, broadcasting: 5\nI0226 11:51:35.477479    2244 log.go:172] (0xc00014c630) Reply frame received for 5\nI0226 11:51:36.013355    2244 log.go:172] (0xc00014c630) Data frame received for 3\nI0226 11:51:36.013490    2244 log.go:172] (0xc0006f6000) (3) Data frame handling\nI0226 11:51:36.013581    2244 log.go:172] (0xc0006f6000) (3) Data frame sent\nI0226 11:51:36.150993    2244 log.go:172] (0xc00014c630) (0xc0006f6000) Stream removed, broadcasting: 3\nI0226 11:51:36.151253    2244 log.go:172] (0xc00014c630) Data frame received for 1\nI0226 11:51:36.151278    2244 log.go:172] (0xc0007165a0) (1) Data frame handling\nI0226 11:51:36.151296    2244 log.go:172] (0xc0007165a0) (1) Data frame sent\nI0226 11:51:36.151355    2244 log.go:172] (0xc00014c630) (0xc0007165a0) Stream removed, broadcasting: 1\nI0226 11:51:36.151490    2244 log.go:172] (0xc00014c630) (0xc0006c0b40) Stream removed, broadcasting: 5\nI0226 11:51:36.151566    2244 log.go:172] (0xc00014c630) Go away received\nI0226 11:51:36.151663    2244 log.go:172] (0xc00014c630) (0xc0007165a0) Stream removed, broadcasting: 1\nI0226 11:51:36.151697    2244 log.go:172] (0xc00014c630) (0xc0006f6000) Stream removed, broadcasting: 3\nI0226 11:51:36.151713    2244 log.go:172] (0xc00014c630) (0xc0006c0b40) Stream removed, broadcasting: 5\n"
Feb 26 11:51:36.160: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:51:36.161: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
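The exec above is how the suite makes ss-0 unhealthy: moving index.html out of nginx's web root breaks the pod's readiness check (the Ready condition flips to false a few lines below) without killing the container. The same step can be run by hand against this namespace:

    NS=e2e-tests-statefulset-hz9qc
    kubectl -n "$NS" exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
    # Watch READY drop to 0/1 once the readiness check starts failing.
    kubectl -n "$NS" get pod ss-0 -w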

Feb 26 11:51:36.223: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 26 11:51:46.251: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:51:46.251: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:51:46.347: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:51:46.347: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:51:46.347: INFO: 
Feb 26 11:51:46.347: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 26 11:51:47.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968811774s
Feb 26 11:51:48.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.941434527s
Feb 26 11:51:49.662: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.824606612s
Feb 26 11:51:50.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.653633236s
Feb 26 11:51:51.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.627305144s
Feb 26 11:51:53.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.611455805s
Feb 26 11:51:54.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.988409798s
Feb 26 11:51:55.645: INFO: Verifying statefulset ss doesn't scale past 3 for another 689.949989ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-hz9qc
Feb 26 11:51:56.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:51:57.395: INFO: stderr: "I0226 11:51:56.984763    2267 log.go:172] (0xc000138630) (0xc000401220) Create stream\nI0226 11:51:56.984896    2267 log.go:172] (0xc000138630) (0xc000401220) Stream added, broadcasting: 1\nI0226 11:51:57.039338    2267 log.go:172] (0xc000138630) Reply frame received for 1\nI0226 11:51:57.039414    2267 log.go:172] (0xc000138630) (0xc0004012c0) Create stream\nI0226 11:51:57.039427    2267 log.go:172] (0xc000138630) (0xc0004012c0) Stream added, broadcasting: 3\nI0226 11:51:57.043229    2267 log.go:172] (0xc000138630) Reply frame received for 3\nI0226 11:51:57.043321    2267 log.go:172] (0xc000138630) (0xc000312000) Create stream\nI0226 11:51:57.043370    2267 log.go:172] (0xc000138630) (0xc000312000) Stream added, broadcasting: 5\nI0226 11:51:57.044930    2267 log.go:172] (0xc000138630) Reply frame received for 5\nI0226 11:51:57.256002    2267 log.go:172] (0xc000138630) Data frame received for 3\nI0226 11:51:57.256069    2267 log.go:172] (0xc0004012c0) (3) Data frame handling\nI0226 11:51:57.256093    2267 log.go:172] (0xc0004012c0) (3) Data frame sent\nI0226 11:51:57.392048    2267 log.go:172] (0xc000138630) (0xc0004012c0) Stream removed, broadcasting: 3\nI0226 11:51:57.392136    2267 log.go:172] (0xc000138630) Data frame received for 1\nI0226 11:51:57.392147    2267 log.go:172] (0xc000401220) (1) Data frame handling\nI0226 11:51:57.392155    2267 log.go:172] (0xc000401220) (1) Data frame sent\nI0226 11:51:57.392161    2267 log.go:172] (0xc000138630) (0xc000401220) Stream removed, broadcasting: 1\nI0226 11:51:57.392185    2267 log.go:172] (0xc000138630) (0xc000312000) Stream removed, broadcasting: 5\nI0226 11:51:57.392232    2267 log.go:172] (0xc000138630) Go away received\nI0226 11:51:57.392360    2267 log.go:172] (0xc000138630) (0xc000401220) Stream removed, broadcasting: 1\nI0226 11:51:57.392369    2267 log.go:172] (0xc000138630) (0xc0004012c0) Stream removed, broadcasting: 3\nI0226 11:51:57.392375    2267 log.go:172] (0xc000138630) (0xc000312000) Stream removed, broadcasting: 5\n"
Feb 26 11:51:57.396: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:51:57.396: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:51:57.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:51:57.736: INFO: stderr: "I0226 11:51:57.531523    2288 log.go:172] (0xc00061a4d0) (0xc0006b8780) Create stream\nI0226 11:51:57.531692    2288 log.go:172] (0xc00061a4d0) (0xc0006b8780) Stream added, broadcasting: 1\nI0226 11:51:57.542741    2288 log.go:172] (0xc00061a4d0) Reply frame received for 1\nI0226 11:51:57.542776    2288 log.go:172] (0xc00061a4d0) (0xc0006b8000) Create stream\nI0226 11:51:57.542785    2288 log.go:172] (0xc00061a4d0) (0xc0006b8000) Stream added, broadcasting: 3\nI0226 11:51:57.545052    2288 log.go:172] (0xc00061a4d0) Reply frame received for 3\nI0226 11:51:57.545075    2288 log.go:172] (0xc00061a4d0) (0xc0005a6000) Create stream\nI0226 11:51:57.545085    2288 log.go:172] (0xc00061a4d0) (0xc0005a6000) Stream added, broadcasting: 5\nI0226 11:51:57.545886    2288 log.go:172] (0xc00061a4d0) Reply frame received for 5\nI0226 11:51:57.633030    2288 log.go:172] (0xc00061a4d0) Data frame received for 3\nI0226 11:51:57.633048    2288 log.go:172] (0xc0006b8000) (3) Data frame handling\nI0226 11:51:57.633053    2288 log.go:172] (0xc0006b8000) (3) Data frame sent\nI0226 11:51:57.633062    2288 log.go:172] (0xc00061a4d0) Data frame received for 5\nI0226 11:51:57.633074    2288 log.go:172] (0xc0005a6000) (5) Data frame handling\nI0226 11:51:57.633082    2288 log.go:172] (0xc0005a6000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0226 11:51:57.729942    2288 log.go:172] (0xc00061a4d0) (0xc0006b8000) Stream removed, broadcasting: 3\nI0226 11:51:57.730123    2288 log.go:172] (0xc00061a4d0) (0xc0005a6000) Stream removed, broadcasting: 5\nI0226 11:51:57.730141    2288 log.go:172] (0xc00061a4d0) Data frame received for 1\nI0226 11:51:57.730172    2288 log.go:172] (0xc0006b8780) (1) Data frame handling\nI0226 11:51:57.730188    2288 log.go:172] (0xc0006b8780) (1) Data frame sent\nI0226 11:51:57.730218    2288 log.go:172] (0xc00061a4d0) (0xc0006b8780) Stream removed, broadcasting: 1\nI0226 11:51:57.730230    2288 log.go:172] (0xc00061a4d0) Go away received\nI0226 11:51:57.730438    2288 log.go:172] (0xc00061a4d0) (0xc0006b8780) Stream removed, broadcasting: 1\nI0226 11:51:57.730452    2288 log.go:172] (0xc00061a4d0) (0xc0006b8000) Stream removed, broadcasting: 3\nI0226 11:51:57.730461    2288 log.go:172] (0xc00061a4d0) (0xc0005a6000) Stream removed, broadcasting: 5\n"
Feb 26 11:51:57.736: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:51:57.736: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:51:57.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:51:58.211: INFO: stderr: "I0226 11:51:57.959171    2309 log.go:172] (0xc0005800b0) (0xc0008b25a0) Create stream\nI0226 11:51:57.959348    2309 log.go:172] (0xc0005800b0) (0xc0008b25a0) Stream added, broadcasting: 1\nI0226 11:51:57.964293    2309 log.go:172] (0xc0005800b0) Reply frame received for 1\nI0226 11:51:57.964336    2309 log.go:172] (0xc0005800b0) (0xc0004d6b40) Create stream\nI0226 11:51:57.964343    2309 log.go:172] (0xc0005800b0) (0xc0004d6b40) Stream added, broadcasting: 3\nI0226 11:51:57.965374    2309 log.go:172] (0xc0005800b0) Reply frame received for 3\nI0226 11:51:57.965396    2309 log.go:172] (0xc0005800b0) (0xc0006dc000) Create stream\nI0226 11:51:57.965403    2309 log.go:172] (0xc0005800b0) (0xc0006dc000) Stream added, broadcasting: 5\nI0226 11:51:57.966354    2309 log.go:172] (0xc0005800b0) Reply frame received for 5\nI0226 11:51:58.062433    2309 log.go:172] (0xc0005800b0) Data frame received for 3\nI0226 11:51:58.062484    2309 log.go:172] (0xc0004d6b40) (3) Data frame handling\nI0226 11:51:58.062497    2309 log.go:172] (0xc0004d6b40) (3) Data frame sent\nI0226 11:51:58.062895    2309 log.go:172] (0xc0005800b0) Data frame received for 5\nI0226 11:51:58.062936    2309 log.go:172] (0xc0006dc000) (5) Data frame handling\nI0226 11:51:58.062955    2309 log.go:172] (0xc0006dc000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0226 11:51:58.203804    2309 log.go:172] (0xc0005800b0) Data frame received for 1\nI0226 11:51:58.203855    2309 log.go:172] (0xc0008b25a0) (1) Data frame handling\nI0226 11:51:58.203884    2309 log.go:172] (0xc0008b25a0) (1) Data frame sent\nI0226 11:51:58.203895    2309 log.go:172] (0xc0005800b0) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0226 11:51:58.205563    2309 log.go:172] (0xc0005800b0) (0xc0006dc000) Stream removed, broadcasting: 5\nI0226 11:51:58.205628    2309 log.go:172] (0xc0005800b0) (0xc0004d6b40) Stream removed, broadcasting: 3\nI0226 11:51:58.205646    2309 log.go:172] (0xc0005800b0) Go away received\nI0226 11:51:58.205663    2309 log.go:172] (0xc0005800b0) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0226 11:51:58.205672    2309 log.go:172] (0xc0005800b0) (0xc0004d6b40) Stream removed, broadcasting: 3\nI0226 11:51:58.205677    2309 log.go:172] (0xc0005800b0) (0xc0006dc000) Stream removed, broadcasting: 5\n"
Feb 26 11:51:58.211: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 26 11:51:58.211: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 26 11:51:58.229: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:51:58.229: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 11:52:08.252: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:52:08.252: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 11:52:08.252: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
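Note that ss-1 and ss-2 were created while ss-0 was still failing its readiness check; that is the burst behaviour under test, and a StatefulSet only does this when spec.podManagementPolicy is Parallel (the default OrderedReady policy would have waited for ss-0). A manual check and scale-up, assuming the namespace from this run:

    NS=e2e-tests-statefulset-hz9qc
    # Should print "Parallel" for a burst-scaling stateful set.
    kubectl -n "$NS" get statefulset ss -o jsonpath='{.spec.podManagementPolicy}'
    # With Parallel management the scale-up creates ss-1 and ss-2 at once,
    # without waiting for ss-0 to become Ready.
    kubectl -n "$NS" scale statefulset ss --replicas=3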
STEP: Scale down will not halt with unhealthy stateful pod
Feb 26 11:52:08.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:52:09.227: INFO: stderr: "I0226 11:52:08.610415    2331 log.go:172] (0xc000138790) (0xc000645540) Create stream\nI0226 11:52:08.610726    2331 log.go:172] (0xc000138790) (0xc000645540) Stream added, broadcasting: 1\nI0226 11:52:08.617805    2331 log.go:172] (0xc000138790) Reply frame received for 1\nI0226 11:52:08.617848    2331 log.go:172] (0xc000138790) (0xc0006455e0) Create stream\nI0226 11:52:08.617854    2331 log.go:172] (0xc000138790) (0xc0006455e0) Stream added, broadcasting: 3\nI0226 11:52:08.619504    2331 log.go:172] (0xc000138790) Reply frame received for 3\nI0226 11:52:08.619535    2331 log.go:172] (0xc000138790) (0xc000645680) Create stream\nI0226 11:52:08.619549    2331 log.go:172] (0xc000138790) (0xc000645680) Stream added, broadcasting: 5\nI0226 11:52:08.621539    2331 log.go:172] (0xc000138790) Reply frame received for 5\nI0226 11:52:08.925740    2331 log.go:172] (0xc000138790) Data frame received for 3\nI0226 11:52:08.926183    2331 log.go:172] (0xc0006455e0) (3) Data frame handling\nI0226 11:52:08.926197    2331 log.go:172] (0xc0006455e0) (3) Data frame sent\nI0226 11:52:09.218523    2331 log.go:172] (0xc000138790) (0xc0006455e0) Stream removed, broadcasting: 3\nI0226 11:52:09.219232    2331 log.go:172] (0xc000138790) Data frame received for 1\nI0226 11:52:09.219317    2331 log.go:172] (0xc000645540) (1) Data frame handling\nI0226 11:52:09.219351    2331 log.go:172] (0xc000645540) (1) Data frame sent\nI0226 11:52:09.219409    2331 log.go:172] (0xc000138790) (0xc000645680) Stream removed, broadcasting: 5\nI0226 11:52:09.219472    2331 log.go:172] (0xc000138790) (0xc000645540) Stream removed, broadcasting: 1\nI0226 11:52:09.219497    2331 log.go:172] (0xc000138790) Go away received\nI0226 11:52:09.219990    2331 log.go:172] (0xc000138790) (0xc000645540) Stream removed, broadcasting: 1\nI0226 11:52:09.220015    2331 log.go:172] (0xc000138790) (0xc0006455e0) Stream removed, broadcasting: 3\nI0226 11:52:09.220021    2331 log.go:172] (0xc000138790) (0xc000645680) Stream removed, broadcasting: 5\n"
Feb 26 11:52:09.227: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:52:09.227: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:52:09.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:52:09.688: INFO: stderr: "I0226 11:52:09.428032    2353 log.go:172] (0xc0006b2370) (0xc0006da640) Create stream\nI0226 11:52:09.428128    2353 log.go:172] (0xc0006b2370) (0xc0006da640) Stream added, broadcasting: 1\nI0226 11:52:09.434391    2353 log.go:172] (0xc0006b2370) Reply frame received for 1\nI0226 11:52:09.434526    2353 log.go:172] (0xc0006b2370) (0xc0004d2c80) Create stream\nI0226 11:52:09.434604    2353 log.go:172] (0xc0006b2370) (0xc0004d2c80) Stream added, broadcasting: 3\nI0226 11:52:09.436494    2353 log.go:172] (0xc0006b2370) Reply frame received for 3\nI0226 11:52:09.436521    2353 log.go:172] (0xc0006b2370) (0xc000116000) Create stream\nI0226 11:52:09.436530    2353 log.go:172] (0xc0006b2370) (0xc000116000) Stream added, broadcasting: 5\nI0226 11:52:09.438288    2353 log.go:172] (0xc0006b2370) Reply frame received for 5\nI0226 11:52:09.569462    2353 log.go:172] (0xc0006b2370) Data frame received for 3\nI0226 11:52:09.569487    2353 log.go:172] (0xc0004d2c80) (3) Data frame handling\nI0226 11:52:09.569495    2353 log.go:172] (0xc0004d2c80) (3) Data frame sent\nI0226 11:52:09.683676    2353 log.go:172] (0xc0006b2370) (0xc000116000) Stream removed, broadcasting: 5\nI0226 11:52:09.683769    2353 log.go:172] (0xc0006b2370) Data frame received for 1\nI0226 11:52:09.683791    2353 log.go:172] (0xc0006b2370) (0xc0004d2c80) Stream removed, broadcasting: 3\nI0226 11:52:09.683829    2353 log.go:172] (0xc0006da640) (1) Data frame handling\nI0226 11:52:09.683845    2353 log.go:172] (0xc0006da640) (1) Data frame sent\nI0226 11:52:09.683851    2353 log.go:172] (0xc0006b2370) (0xc0006da640) Stream removed, broadcasting: 1\nI0226 11:52:09.683859    2353 log.go:172] (0xc0006b2370) Go away received\nI0226 11:52:09.684035    2353 log.go:172] (0xc0006b2370) (0xc0006da640) Stream removed, broadcasting: 1\nI0226 11:52:09.684044    2353 log.go:172] (0xc0006b2370) (0xc0004d2c80) Stream removed, broadcasting: 3\nI0226 11:52:09.684050    2353 log.go:172] (0xc0006b2370) (0xc000116000) Stream removed, broadcasting: 5\n"
Feb 26 11:52:09.688: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:52:09.688: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 26 11:52:09.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 26 11:52:10.328: INFO: stderr: "I0226 11:52:09.943748    2374 log.go:172] (0xc0006904d0) (0xc0005ac960) Create stream\nI0226 11:52:09.943869    2374 log.go:172] (0xc0006904d0) (0xc0005ac960) Stream added, broadcasting: 1\nI0226 11:52:09.952193    2374 log.go:172] (0xc0006904d0) Reply frame received for 1\nI0226 11:52:09.952212    2374 log.go:172] (0xc0006904d0) (0xc0003aa280) Create stream\nI0226 11:52:09.952218    2374 log.go:172] (0xc0006904d0) (0xc0003aa280) Stream added, broadcasting: 3\nI0226 11:52:09.953589    2374 log.go:172] (0xc0006904d0) Reply frame received for 3\nI0226 11:52:09.953604    2374 log.go:172] (0xc0006904d0) (0xc0005ac000) Create stream\nI0226 11:52:09.953609    2374 log.go:172] (0xc0006904d0) (0xc0005ac000) Stream added, broadcasting: 5\nI0226 11:52:09.955137    2374 log.go:172] (0xc0006904d0) Reply frame received for 5\nI0226 11:52:10.187440    2374 log.go:172] (0xc0006904d0) Data frame received for 3\nI0226 11:52:10.187464    2374 log.go:172] (0xc0003aa280) (3) Data frame handling\nI0226 11:52:10.187471    2374 log.go:172] (0xc0003aa280) (3) Data frame sent\nI0226 11:52:10.323246    2374 log.go:172] (0xc0006904d0) (0xc0003aa280) Stream removed, broadcasting: 3\nI0226 11:52:10.323318    2374 log.go:172] (0xc0006904d0) (0xc0005ac000) Stream removed, broadcasting: 5\nI0226 11:52:10.323352    2374 log.go:172] (0xc0006904d0) Data frame received for 1\nI0226 11:52:10.323369    2374 log.go:172] (0xc0005ac960) (1) Data frame handling\nI0226 11:52:10.323382    2374 log.go:172] (0xc0005ac960) (1) Data frame sent\nI0226 11:52:10.323389    2374 log.go:172] (0xc0006904d0) (0xc0005ac960) Stream removed, broadcasting: 1\nI0226 11:52:10.323399    2374 log.go:172] (0xc0006904d0) Go away received\nI0226 11:52:10.323749    2374 log.go:172] (0xc0006904d0) (0xc0005ac960) Stream removed, broadcasting: 1\nI0226 11:52:10.323761    2374 log.go:172] (0xc0006904d0) (0xc0003aa280) Stream removed, broadcasting: 3\nI0226 11:52:10.323766    2374 log.go:172] (0xc0006904d0) (0xc0005ac000) Stream removed, broadcasting: 5\n"
Feb 26 11:52:10.328: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 26 11:52:10.328: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
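All three replicas are now unready, so the scale-down that follows must proceed without waiting for readiness; the framework then polls the StatefulSet status until it reports zero replicas. The same status fields can be read directly:

    NS=e2e-tests-statefulset-hz9qc
    # Prints "<replicas> <readyReplicas>"; readyReplicas may be empty while it is 0.
    kubectl -n "$NS" get statefulset ss \
      -o jsonpath='{.status.replicas} {.status.readyReplicas}{"\n"}'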

Feb 26 11:52:10.328: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:52:10.363: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 26 11:52:20.394: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:52:20.394: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:52:20.394: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 26 11:52:20.526: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:20.526: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:20.527: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:20.527: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:20.527: INFO: 
Feb 26 11:52:20.527: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:22.456: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:22.456: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:22.456: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:22.456: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:22.456: INFO: 
Feb 26 11:52:22.456: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:23.493: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:23.493: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:23.494: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:23.494: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:23.494: INFO: 
Feb 26 11:52:23.494: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:24.523: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:24.523: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:24.523: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:24.523: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:24.524: INFO: 
Feb 26 11:52:24.524: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:26.459: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:26.460: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:26.460: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:26.460: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:26.461: INFO: 
Feb 26 11:52:26.461: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:28.069: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:28.069: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:28.069: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:28.069: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:28.069: INFO: 
Feb 26 11:52:28.069: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:29.087: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:29.087: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:29.087: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:29.087: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:29.087: INFO: 
Feb 26 11:52:29.087: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 26 11:52:30.111: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 26 11:52:30.111: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:15 +0000 UTC  }]
Feb 26 11:52:30.111: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:30.111: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:52:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 11:51:46 +0000 UTC  }]
Feb 26 11:52:30.111: INFO: 
Feb 26 11:52:30.111: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-hz9qc
Feb 26 11:52:31.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:52:31.346: INFO: rc: 1
Feb 26 11:52:31.347: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001d228a0 exit status 1   true [0xc0009f6028 0xc0009f6040 0xc0009f6058] [0xc0009f6028 0xc0009f6040 0xc0009f6058] [0xc0009f6038 0xc0009f6050] [0x935700 0x935700] 0xc001564b40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
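The failed exec above repeats, with only timestamps and pointer values changing, for the rest of the scale-down: the framework keeps trying to move index.html back on ss-0, but the pod is already being deleted, so kubectl first reports a missing nginx container and then a NotFound pod, and each attempt is retried after 10 seconds. A simpler way to wait out the same scale-down by hand, assuming the namespace from this run:

    NS=e2e-tests-statefulset-hz9qc
    # Poll until the pod object is gone, using the same 10s interval as the retries above.
    while kubectl -n "$NS" get pod ss-0 >/dev/null 2>&1; do
      sleep 10
    done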

Feb 26 11:52:41.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:52:41.474: INFO: rc: 1
Feb 26 11:52:41.475: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d22a80 exit status 1   true [0xc0009f6060 0xc0009f6078 0xc0009f6090] [0xc0009f6060 0xc0009f6078 0xc0009f6090] [0xc0009f6070 0xc0009f6088] [0x935700 0x935700] 0xc001564de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:52:51.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:52:51.624: INFO: rc: 1
Feb 26 11:52:51.624: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d22bd0 exit status 1   true [0xc0009f6098 0xc0009f60b0 0xc0009f60c8] [0xc0009f6098 0xc0009f60b0 0xc0009f60c8] [0xc0009f60a8 0xc0009f60c0] [0x935700 0x935700] 0xc001565080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:53:01.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:53:02.296: INFO: rc: 1
Feb 26 11:53:02.297: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b42840 exit status 1   true [0xc000038330 0xc000038458 0xc0000384d8] [0xc000038330 0xc000038458 0xc0000384d8] [0xc000038400 0xc0000384b0] [0x935700 0x935700] 0xc002292720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:53:12.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:53:12.426: INFO: rc: 1
Feb 26 11:53:12.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d22d20 exit status 1   true [0xc0009f60d0 0xc0009f60e8 0xc0009f6100] [0xc0009f60d0 0xc0009f60e8 0xc0009f6100] [0xc0009f60e0 0xc0009f60f8] [0x935700 0x935700] 0xc001565320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:53:22.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:53:22.571: INFO: rc: 1
Feb 26 11:53:22.572: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160f3b0 exit status 1   true [0xc00183a000 0xc00183a018 0xc00183a030] [0xc00183a000 0xc00183a018 0xc00183a030] [0xc00183a010 0xc00183a028] [0x935700 0x935700] 0xc001db6360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:53:32.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:53:32.685: INFO: rc: 1
Feb 26 11:53:32.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d22ed0 exit status 1   true [0xc0009f6108 0xc0009f6120 0xc0009f6138] [0xc0009f6108 0xc0009f6120 0xc0009f6138] [0xc0009f6118 0xc0009f6130] [0x935700 0x935700] 0xc001565680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:53:42.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:53:42.783: INFO: rc: 1
Feb 26 11:53:42.783: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d22ff0 exit status 1   true [0xc0009f6140 0xc0009f6158 0xc0009f6170] [0xc0009f6140 0xc0009f6158 0xc0009f6170] [0xc0009f6150 0xc0009f6168] [0x935700 0x935700] 0xc001565920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:53:52.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:53:52.955: INFO: rc: 1
Feb 26 11:53:52.956: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160f4d0 exit status 1   true [0xc00183a038 0xc00183a050 0xc00183a068] [0xc00183a038 0xc00183a050 0xc00183a068] [0xc00183a048 0xc00183a060] [0x935700 0x935700] 0xc001db6720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:54:02.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:54:03.060: INFO: rc: 1
Feb 26 11:54:03.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019422a0 exit status 1   true [0xc00016ecc8 0xc00016ed88 0xc00016eea8] [0xc00016ecc8 0xc00016ed88 0xc00016eea8] [0xc00016ece8 0xc00016ee90] [0x935700 0x935700] 0xc002411aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:54:13.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:54:13.165: INFO: rc: 1
Feb 26 11:54:13.166: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160f650 exit status 1   true [0xc00183a070 0xc00183a088 0xc00183a0a0] [0xc00183a070 0xc00183a088 0xc00183a0a0] [0xc00183a080 0xc00183a098] [0x935700 0x935700] 0xc001db6ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:54:23.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:54:23.313: INFO: rc: 1
Feb 26 11:54:23.314: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b429c0 exit status 1   true [0xc0000384e8 0xc000038608 0xc0000386f8] [0xc0000384e8 0xc000038608 0xc0000386f8] [0xc0000385f0 0xc0000386b8] [0x935700 0x935700] 0xc002293da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:54:33.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:54:33.492: INFO: rc: 1
Feb 26 11:54:33.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e1b0 exit status 1   true [0xc000038168 0xc000038210 0xc000038290] [0xc000038168 0xc000038210 0xc000038290] [0xc000038198 0xc000038250] [0x935700 0x935700] 0xc0020e1e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:54:43.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:54:43.682: INFO: rc: 1
Feb 26 11:54:43.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001942120 exit status 1   true [0xc00016e000 0xc00016ecb0 0xc00016ece8] [0xc00016e000 0xc00016ecb0 0xc00016ece8] [0xc00016ebe0 0xc00016ece0] [0x935700 0x935700] 0xc001db6300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:54:53.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:54:53.870: INFO: rc: 1
Feb 26 11:54:53.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001942240 exit status 1   true [0xc00016ed88 0xc00016eea8 0xc00016ef58] [0xc00016ed88 0xc00016eea8 0xc00016ef58] [0xc00016ee90 0xc00016ef10] [0x935700 0x935700] 0xc001db66c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:55:03.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:55:04.037: INFO: rc: 1
Feb 26 11:55:04.038: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e300 exit status 1   true [0xc0000382c8 0xc000038330 0xc000038458] [0xc0000382c8 0xc000038330 0xc000038458] [0xc000038310 0xc000038400] [0x935700 0x935700] 0xc0022922a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:55:14.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:55:14.151: INFO: rc: 1
Feb 26 11:55:14.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e420 exit status 1   true [0xc000038480 0xc0000384e8 0xc000038608] [0xc000038480 0xc0000384e8 0xc000038608] [0xc0000384d8 0xc0000385f0] [0x935700 0x935700] 0xc002292780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:55:24.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:55:24.253: INFO: rc: 1
Feb 26 11:55:24.253: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001b42150 exit status 1   true [0xc00183a000 0xc00183a018 0xc00183a030] [0xc00183a000 0xc00183a018 0xc00183a030] [0xc00183a010 0xc00183a028] [0x935700 0x935700] 0xc002411a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:55:34.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:55:34.366: INFO: rc: 1
Feb 26 11:55:34.366: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e540 exit status 1   true [0xc000038678 0xc000038728 0xc000038818] [0xc000038678 0xc000038728 0xc000038818] [0xc0000386f8 0xc000038800] [0x935700 0x935700] 0xc002293e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:55:44.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:55:44.484: INFO: rc: 1
Feb 26 11:55:44.484: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019423c0 exit status 1   true [0xc00016efc8 0xc00016f088 0xc00016f0e0] [0xc00016efc8 0xc00016f088 0xc00016f0e0] [0xc00016f070 0xc00016f0d8] [0x935700 0x935700] 0xc001db6a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:55:54.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:55:54.656: INFO: rc: 1
Feb 26 11:55:54.657: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e6f0 exit status 1   true [0xc000038870 0xc000038908 0xc0000389b0] [0xc000038870 0xc000038908 0xc0000389b0] [0xc0000388c0 0xc000038990] [0x935700 0x935700] 0xc0015648a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:56:04.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:56:04.801: INFO: rc: 1
Feb 26 11:56:04.802: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d22180 exit status 1   true [0xc0009f6000 0xc0009f6018 0xc0009f6030] [0xc0009f6000 0xc0009f6018 0xc0009f6030] [0xc0009f6010 0xc0009f6028] [0x935700 0x935700] 0xc001c474a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:56:14.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:56:14.949: INFO: rc: 1
Feb 26 11:56:14.949: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160eba0 exit status 1   true [0xc000038a20 0xc000038a48 0xc000038aa8] [0xc000038a20 0xc000038a48 0xc000038aa8] [0xc000038a38 0xc000038a58] [0x935700 0x935700] 0xc001564c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:56:24.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:56:25.059: INFO: rc: 1
Feb 26 11:56:25.060: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001942510 exit status 1   true [0xc00016f0e8 0xc00016f158 0xc00016f180] [0xc00016f0e8 0xc00016f158 0xc00016f180] [0xc00016f148 0xc00016f178] [0x935700 0x935700] 0xc001db6d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:56:35.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:56:35.242: INFO: rc: 1
Feb 26 11:56:35.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001172150 exit status 1   true [0xc000bb8000 0xc000bb8018 0xc000bb8030] [0xc000bb8000 0xc000bb8018 0xc000bb8030] [0xc000bb8010 0xc000bb8028] [0x935700 0x935700] 0xc001b37320 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:56:45.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:56:45.420: INFO: rc: 1
Feb 26 11:56:45.421: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e1e0 exit status 1   true [0xc00000e1f8 0xc000bb8048 0xc000bb8060] [0xc00000e1f8 0xc000bb8048 0xc000bb8060] [0xc000bb8040 0xc000bb8058] [0x935700 0x935700] 0xc0022922a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:56:55.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:56:55.565: INFO: rc: 1
Feb 26 11:56:55.565: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e330 exit status 1   true [0xc000bb8068 0xc000bb8080 0xc000bb8098] [0xc000bb8068 0xc000bb8080 0xc000bb8098] [0xc000bb8078 0xc000bb8090] [0x935700 0x935700] 0xc002292780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:57:05.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:57:05.709: INFO: rc: 1
Feb 26 11:57:05.710: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00160e4b0 exit status 1   true [0xc000bb80a0 0xc000bb80b8 0xc000bb80d0] [0xc000bb80a0 0xc000bb80b8 0xc000bb80d0] [0xc000bb80b0 0xc000bb80c8] [0x935700 0x935700] 0xc002293e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:57:15.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:57:15.900: INFO: rc: 1
Feb 26 11:57:15.901: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d220f0 exit status 1   true [0xc00183a000 0xc00183a018 0xc00183a030] [0xc00183a000 0xc00183a018 0xc00183a030] [0xc00183a010 0xc00183a028] [0x935700 0x935700] 0xc0020e1e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:57:25.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:57:26.027: INFO: rc: 1
Feb 26 11:57:26.028: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001172300 exit status 1   true [0xc0009f6000 0xc0009f6018 0xc0009f6030] [0xc0009f6000 0xc0009f6018 0xc0009f6030] [0xc0009f6010 0xc0009f6028] [0x935700 0x935700] 0xc001b37920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Feb 26 11:57:36.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hz9qc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 26 11:57:36.180: INFO: rc: 1
Feb 26 11:57:36.181: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 26 11:57:36.181: INFO: Scaling statefulset ss to 0
Feb 26 11:57:36.210: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 26 11:57:36.215: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hz9qc
Feb 26 11:57:36.221: INFO: Scaling statefulset ss to 0
Feb 26 11:57:36.234: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 11:57:36.237: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:57:36.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-hz9qc" for this suite.
Feb 26 11:57:44.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:57:44.775: INFO: namespace: e2e-tests-statefulset-hz9qc, resource: bindings, ignored listing per whitelist
Feb 26 11:57:44.887: INFO: namespace e2e-tests-statefulset-hz9qc deletion completed in 8.423206809s

• [SLOW TEST:390.008 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
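Editor's note: the long run of "Waiting 10s to retry failed RunHostCmd" entries above is the framework repeatedly shelling out to `kubectl exec` against pod ss-0, which has already been removed during the burst scale-down, so every attempt fails with `Error from server (NotFound)` until the budget is exhausted and the StatefulSet is scaled to 0. A minimal, stdlib-only sketch of that retry pattern follows; the namespace, pod name, command, and 10s interval are taken from the log, but the code is illustrative and is not the e2e framework's actual RunHostCmd implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd shells out to kubectl exec the same way the log above does and
// retries every 10s until the command succeeds or the time budget runs out.
func runHostCmd(kubeconfig, ns, pod, cmd string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for {
		out, err := exec.Command("kubectl",
			"--kubeconfig="+kubeconfig,
			"exec", "--namespace="+ns, pod,
			"--", "/bin/sh", "-c", cmd).CombinedOutput()
		if err == nil {
			fmt.Printf("stdout/stderr: %s\n", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up on %q: %v (last output: %s)", cmd, err, out)
		}
		fmt.Println("Waiting 10s to retry failed RunHostCmd:", err)
		time.Sleep(10 * time.Second)
	}
}

func main() {
	// Values taken from the log above; the pod no longer exists, so every
	// attempt returns "Error from server (NotFound)" until the budget expires.
	_ = runHostCmd("/root/.kube/config", "e2e-tests-statefulset-hz9qc", "ss-0",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true", 4*time.Minute)
}
```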
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:57:44.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 26 11:57:45.122: INFO: Waiting up to 5m0s for pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008" in namespace "e2e-tests-containers-rrpw6" to be "success or failure"
Feb 26 11:57:45.142: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.60697ms
Feb 26 11:57:47.159: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03668631s
Feb 26 11:57:49.178: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056342407s
Feb 26 11:57:51.705: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.582744946s
Feb 26 11:57:53.757: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.634700765s
Feb 26 11:57:55.777: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.655250172s
STEP: Saw pod success
Feb 26 11:57:55.777: INFO: Pod "client-containers-33372b3a-588f-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:57:55.786: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-33372b3a-588f-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 11:57:56.011: INFO: Waiting for pod client-containers-33372b3a-588f-11ea-8134-0242ac110008 to disappear
Feb 26 11:57:56.020: INFO: Pod client-containers-33372b3a-588f-11ea-8134-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:57:56.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-rrpw6" for this suite.
Feb 26 11:58:02.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:58:02.303: INFO: namespace: e2e-tests-containers-rrpw6, resource: bindings, ignored listing per whitelist
Feb 26 11:58:02.330: INFO: namespace e2e-tests-containers-rrpw6 deletion completed in 6.300958084s

• [SLOW TEST:17.442 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
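Editor's note: the "override the image's default command and arguments" case above ("Creating a pod to test override all") exercises the pod spec's Command and Args fields, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch of such a pod, built with the k8s.io/api types; the image and the echoed string are illustrative, not the test's actual test image or payload.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-all"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sh"},                         // replaces the image ENTRYPOINT
				Args:    []string{"-c", "echo override all && true"}, // replaces the image CMD
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```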
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:58:02.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 11:58:02.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-kztmd" to be "success or failure"
Feb 26 11:58:02.888: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 88.633769ms
Feb 26 11:58:04.904: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104496522s
Feb 26 11:58:06.962: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162491333s
Feb 26 11:58:08.974: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173960126s
Feb 26 11:58:10.988: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188768074s
Feb 26 11:58:13.014: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214636931s
STEP: Saw pod success
Feb 26 11:58:13.014: INFO: Pod "downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 11:58:13.024: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 11:58:13.583: INFO: Waiting for pod downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008 to disappear
Feb 26 11:58:13.595: INFO: Pod downwardapi-volume-3db7eed9-588f-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:58:13.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kztmd" for this suite.
Feb 26 11:58:20.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:58:20.140: INFO: namespace: e2e-tests-projected-kztmd, resource: bindings, ignored listing per whitelist
Feb 26 11:58:20.151: INFO: namespace e2e-tests-projected-kztmd deletion completed in 6.168473377s

• [SLOW TEST:17.821 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
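Editor's note: the Projected downwardAPI case above mounts a projected volume whose downwardAPI source exposes the container's memory request as a file, which the client-container then prints so the framework can compare it. A minimal sketch of that volume, assuming illustrative names ("podinfo", "memory_request", "client-container") rather than the test's own:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume exposing requests.memory of "client-container" as a file.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```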
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:58:20.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 11:59:20.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9fzhd" for this suite.
Feb 26 11:59:44.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 11:59:44.813: INFO: namespace: e2e-tests-container-probe-9fzhd, resource: bindings, ignored listing per whitelist
Feb 26 11:59:44.831: INFO: namespace e2e-tests-container-probe-9fzhd deletion completed in 24.204795809s

• [SLOW TEST:84.680 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
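Editor's note: the probing test above runs a pod whose readiness probe always fails and then confirms, over roughly a minute, that the pod is never reported Ready and the container is never restarted; readiness failures only keep a pod out of Service endpoints, unlike liveness failures. A minimal sketch of such a container spec follows; names and the /bin/false probe command are illustrative, and note that in the v1.13-era API shown in this log the embedded field is corev1.Handler, which recent client-go releases rename to ProbeHandler.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A readiness probe that can never succeed: the pod stays NotReady but
	// its container is never restarted.
	probe := &corev1.Probe{
		Handler: corev1.Handler{ // corev1.ProbeHandler on newer API versions
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	container := corev1.Container{
		Name:           "never-ready", // illustrative
		Image:          "busybox",
		Command:        []string{"/bin/sh", "-c", "sleep 600"},
		ReadinessProbe: probe,
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}
```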
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 11:59:44.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 26 11:59:45.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:47.141: INFO: stderr: ""
Feb 26 11:59:47.142: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 11:59:47.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:47.369: INFO: stderr: ""
Feb 26 11:59:47.370: INFO: stdout: "update-demo-nautilus-2gjsn update-demo-nautilus-qpw9h "
Feb 26 11:59:47.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gjsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:47.570: INFO: stderr: ""
Feb 26 11:59:47.570: INFO: stdout: ""
Feb 26 11:59:47.570: INFO: update-demo-nautilus-2gjsn is created but not running
Feb 26 11:59:52.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:52.677: INFO: stderr: ""
Feb 26 11:59:52.677: INFO: stdout: "update-demo-nautilus-2gjsn update-demo-nautilus-qpw9h "
Feb 26 11:59:52.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gjsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:52.814: INFO: stderr: ""
Feb 26 11:59:52.815: INFO: stdout: ""
Feb 26 11:59:52.815: INFO: update-demo-nautilus-2gjsn is created but not running
Feb 26 11:59:57.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:58.054: INFO: stderr: ""
Feb 26 11:59:58.054: INFO: stdout: "update-demo-nautilus-2gjsn update-demo-nautilus-qpw9h "
Feb 26 11:59:58.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gjsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 11:59:58.203: INFO: stderr: ""
Feb 26 11:59:58.203: INFO: stdout: ""
Feb 26 11:59:58.203: INFO: update-demo-nautilus-2gjsn is created but not running
Feb 26 12:00:03.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:03.352: INFO: stderr: ""
Feb 26 12:00:03.352: INFO: stdout: "update-demo-nautilus-2gjsn update-demo-nautilus-qpw9h "
Feb 26 12:00:03.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gjsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:03.478: INFO: stderr: ""
Feb 26 12:00:03.478: INFO: stdout: "true"
Feb 26 12:00:03.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2gjsn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:03.577: INFO: stderr: ""
Feb 26 12:00:03.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 12:00:03.578: INFO: validating pod update-demo-nautilus-2gjsn
Feb 26 12:00:03.682: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 12:00:03.682: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 12:00:03.682: INFO: update-demo-nautilus-2gjsn is verified up and running
Feb 26 12:00:03.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qpw9h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:03.796: INFO: stderr: ""
Feb 26 12:00:03.796: INFO: stdout: "true"
Feb 26 12:00:03.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qpw9h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:03.948: INFO: stderr: ""
Feb 26 12:00:03.949: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 12:00:03.949: INFO: validating pod update-demo-nautilus-qpw9h
Feb 26 12:00:03.977: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 12:00:03.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 12:00:03.977: INFO: update-demo-nautilus-qpw9h is verified up and running
STEP: rolling-update to new replication controller
Feb 26 12:00:03.981: INFO: scanned /root for discovery docs: 
Feb 26 12:00:03.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:40.308: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 26 12:00:40.308: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 12:00:40.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:40.617: INFO: stderr: ""
Feb 26 12:00:40.617: INFO: stdout: "update-demo-kitten-mzvzm update-demo-kitten-r8jm4 update-demo-nautilus-qpw9h "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 26 12:00:45.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:45.824: INFO: stderr: ""
Feb 26 12:00:45.824: INFO: stdout: "update-demo-kitten-mzvzm update-demo-kitten-r8jm4 "
Feb 26 12:00:45.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mzvzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:45.945: INFO: stderr: ""
Feb 26 12:00:45.945: INFO: stdout: "true"
Feb 26 12:00:45.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mzvzm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:46.082: INFO: stderr: ""
Feb 26 12:00:46.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 26 12:00:46.082: INFO: validating pod update-demo-kitten-mzvzm
Feb 26 12:00:46.105: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 26 12:00:46.105: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 26 12:00:46.105: INFO: update-demo-kitten-mzvzm is verified up and running
Feb 26 12:00:46.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r8jm4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:46.191: INFO: stderr: ""
Feb 26 12:00:46.191: INFO: stdout: "true"
Feb 26 12:00:46.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r8jm4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-89vb5'
Feb 26 12:00:46.295: INFO: stderr: ""
Feb 26 12:00:46.295: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 26 12:00:46.295: INFO: validating pod update-demo-kitten-r8jm4
Feb 26 12:00:46.307: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 26 12:00:46.307: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 26 12:00:46.307: INFO: update-demo-kitten-r8jm4 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:00:46.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-89vb5" for this suite.
Feb 26 12:01:10.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:01:10.433: INFO: namespace: e2e-tests-kubectl-89vb5, resource: bindings, ignored listing per whitelist
Feb 26 12:01:10.536: INFO: namespace e2e-tests-kubectl-89vb5 deletion completed in 24.222516415s

• [SLOW TEST:85.704 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
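Editor's note: the Update Demo test above drives everything through kubectl: it creates the nautilus replication controller, polls `kubectl get pods -o template` until every name=update-demo pod reports a running update-demo container, performs the (deprecated) `kubectl rolling-update` to the kitten image, and re-validates. A minimal sketch of that polling step, shelling out to kubectl with the same templates as the log; the namespace and label come from the log, and this is not the framework's own code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

const (
	kubeconfig = "/root/.kube/config"
	namespace  = "e2e-tests-kubectl-89vb5" // from the log; illustrative
)

func kubectl(args ...string) (string, error) {
	args = append([]string{"--kubeconfig=" + kubeconfig}, args...)
	out, err := exec.Command("kubectl", args...).Output()
	return string(out), err
}

func main() {
	// Poll until every name=update-demo pod has a running "update-demo" container.
	running := `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	for {
		names, _ := kubectl("get", "pods", "-o", "template",
			"--template={{range.items}}{{.metadata.name}} {{end}}",
			"-l", "name=update-demo", "--namespace="+namespace)
		allUp := true
		for _, pod := range strings.Fields(names) {
			ok, _ := kubectl("get", "pods", pod, "-o", "template",
				"--template="+running, "--namespace="+namespace)
			if ok != "true" {
				fmt.Printf("%s is created but not running\n", pod)
				allUp = false
			}
		}
		if allUp && names != "" {
			fmt.Println("all update-demo pods are running")
			return
		}
		time.Sleep(5 * time.Second)
	}
}
```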
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:01:10.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-blp2s
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 26 12:01:10.915: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 26 12:01:47.213: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-blp2s PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 12:01:47.214: INFO: >>> kubeConfig: /root/.kube/config
I0226 12:01:47.312905       9 log.go:172] (0xc0009c73f0) (0xc000747e00) Create stream
I0226 12:01:47.313195       9 log.go:172] (0xc0009c73f0) (0xc000747e00) Stream added, broadcasting: 1
I0226 12:01:47.322111       9 log.go:172] (0xc0009c73f0) Reply frame received for 1
I0226 12:01:47.322213       9 log.go:172] (0xc0009c73f0) (0xc00115d7c0) Create stream
I0226 12:01:47.322236       9 log.go:172] (0xc0009c73f0) (0xc00115d7c0) Stream added, broadcasting: 3
I0226 12:01:47.323866       9 log.go:172] (0xc0009c73f0) Reply frame received for 3
I0226 12:01:47.323917       9 log.go:172] (0xc0009c73f0) (0xc000747ea0) Create stream
I0226 12:01:47.323930       9 log.go:172] (0xc0009c73f0) (0xc000747ea0) Stream added, broadcasting: 5
I0226 12:01:47.325406       9 log.go:172] (0xc0009c73f0) Reply frame received for 5
I0226 12:01:48.712504       9 log.go:172] (0xc0009c73f0) Data frame received for 3
I0226 12:01:48.712599       9 log.go:172] (0xc00115d7c0) (3) Data frame handling
I0226 12:01:48.712626       9 log.go:172] (0xc00115d7c0) (3) Data frame sent
I0226 12:01:48.877960       9 log.go:172] (0xc0009c73f0) Data frame received for 1
I0226 12:01:48.878182       9 log.go:172] (0xc0009c73f0) (0xc00115d7c0) Stream removed, broadcasting: 3
I0226 12:01:48.878268       9 log.go:172] (0xc000747e00) (1) Data frame handling
I0226 12:01:48.878323       9 log.go:172] (0xc000747e00) (1) Data frame sent
I0226 12:01:48.878363       9 log.go:172] (0xc0009c73f0) (0xc000747ea0) Stream removed, broadcasting: 5
I0226 12:01:48.878458       9 log.go:172] (0xc0009c73f0) (0xc000747e00) Stream removed, broadcasting: 1
I0226 12:01:48.878504       9 log.go:172] (0xc0009c73f0) Go away received
I0226 12:01:48.879182       9 log.go:172] (0xc0009c73f0) (0xc000747e00) Stream removed, broadcasting: 1
I0226 12:01:48.879208       9 log.go:172] (0xc0009c73f0) (0xc00115d7c0) Stream removed, broadcasting: 3
I0226 12:01:48.879216       9 log.go:172] (0xc0009c73f0) (0xc000747ea0) Stream removed, broadcasting: 5
Feb 26 12:01:48.879: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:01:48.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-blp2s" for this suite.
Feb 26 12:02:12.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:02:13.088: INFO: namespace: e2e-tests-pod-network-test-blp2s, resource: bindings, ignored listing per whitelist
Feb 26 12:02:13.168: INFO: namespace e2e-tests-pod-network-test-blp2s deletion completed in 24.272110586s

• [SLOW TEST:62.630 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
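Editor's note: the node-pod UDP check above executes `echo 'hostName' | nc -w 1 -u 10.32.0.4 8081` inside the hostexec pod and expects the netserver pod to answer with its hostname ("Found all expected endpoints: [netserver-0]"). A stdlib-only sketch of the same probe; the pod IP and port are the ones from this particular run and only make sense inside that cluster.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Mirrors: echo 'hostName' | nc -w 1 -u 10.32.0.4 8081
	conn, err := net.DialTimeout("udp", "10.32.0.4:8081", time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	if _, err := conn.Write([]byte("hostName\n")); err != nil {
		panic(err)
	}
	conn.SetReadDeadline(time.Now().Add(time.Second))

	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("endpoint answered: %q\n", buf[:n]) // expected: the netserver pod's hostname
}
```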
S
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:02:13.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:02:13.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-fw8rl" to be "success or failure"
Feb 26 12:02:13.337: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.758321ms
Feb 26 12:02:15.370: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042564396s
Feb 26 12:02:17.393: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064594747s
Feb 26 12:02:20.019: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690830644s
Feb 26 12:02:22.200: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.872510386s
Feb 26 12:02:24.216: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.88827884s
STEP: Saw pod success
Feb 26 12:02:24.216: INFO: Pod "downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:02:24.233: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:02:25.756: INFO: Waiting for pod downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008 to disappear
Feb 26 12:02:25.778: INFO: Pod downwardapi-volume-d314f8fb-588f-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:02:25.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fw8rl" for this suite.
Feb 26 12:02:33.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:02:34.175: INFO: namespace: e2e-tests-downward-api-fw8rl, resource: bindings, ignored listing per whitelist
Feb 26 12:02:34.186: INFO: namespace e2e-tests-downward-api-fw8rl deletion completed in 8.397756904s

• [SLOW TEST:21.018 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
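Editor's note: the "should provide podname only" case above uses a plain (non-projected) downward API volume with a single fieldRef item for metadata.name, which the client-container mounts and prints. A minimal sketch of that volume; "podinfo" and the "podname" path are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API volume exposing only the pod's own name as a file.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						APIVersion: "v1",
						FieldPath:  "metadata.name",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```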
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:02:34.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-xd2mt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xd2mt to expose endpoints map[]
Feb 26 12:02:34.431: INFO: Get endpoints failed (46.52805ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 26 12:02:35.455: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xd2mt exposes endpoints map[] (1.070025791s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-xd2mt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xd2mt to expose endpoints map[pod1:[80]]
Feb 26 12:02:41.008: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.505026471s elapsed, will retry)
Feb 26 12:02:44.172: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xd2mt exposes endpoints map[pod1:[80]] (8.669760337s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-xd2mt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xd2mt to expose endpoints map[pod1:[80] pod2:[80]]
Feb 26 12:02:49.614: INFO: Unexpected endpoints: found map[e049b6e3-588f-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.421717565s elapsed, will retry)
Feb 26 12:02:52.800: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xd2mt exposes endpoints map[pod1:[80] pod2:[80]] (8.607508843s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-xd2mt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xd2mt to expose endpoints map[pod2:[80]]
Feb 26 12:02:52.934: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xd2mt exposes endpoints map[pod2:[80]] (22.45565ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-xd2mt
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-xd2mt to expose endpoints map[]
Feb 26 12:02:54.096: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-xd2mt exposes endpoints map[] (1.143571014s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:02:54.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-xd2mt" for this suite.
Feb 26 12:03:18.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:03:18.642: INFO: namespace: e2e-tests-services-xd2mt, resource: bindings, ignored listing per whitelist
Feb 26 12:03:18.715: INFO: namespace e2e-tests-services-xd2mt deletion completed in 24.285032338s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:44.529 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
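Editor's note: the Services case above creates the selector-based service endpoint-test2 and then watches the Endpoints object track pod lifecycle: empty at first, pod1:[80] once pod1 is Ready, pod1 and pod2 together, and back to empty as the pods are deleted. A minimal sketch of such a service; the selector label is illustrative, since the log does not show the labels the test puts on pod1/pod2.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The endpoints controller adds/removes Ready pods matching the selector,
	// which is what the test's map[] / map[pod1:[80] pod2:[80]] checks observe.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"}, // illustrative label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```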
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:03:18.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb 26 12:03:18.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:19.242: INFO: stderr: ""
Feb 26 12:03:19.242: INFO: stdout: "pod/pause created\n"
Feb 26 12:03:19.242: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 26 12:03:19.242: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-8bjmp" to be "running and ready"
Feb 26 12:03:19.273: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 30.547047ms
Feb 26 12:03:21.302: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059595421s
Feb 26 12:03:23.322: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079852513s
Feb 26 12:03:25.803: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559991387s
Feb 26 12:03:27.960: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.717728863s
Feb 26 12:03:29.981: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.738713101s
Feb 26 12:03:29.981: INFO: Pod "pause" satisfied condition "running and ready"
Feb 26 12:03:29.981: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 26 12:03:29.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:30.214: INFO: stderr: ""
Feb 26 12:03:30.215: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 26 12:03:30.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:30.337: INFO: stderr: ""
Feb 26 12:03:30.337: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 26 12:03:30.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:30.479: INFO: stderr: ""
Feb 26 12:03:30.479: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 26 12:03:30.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:30.620: INFO: stderr: ""
Feb 26 12:03:30.620: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb 26 12:03:30.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:30.831: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 12:03:30.832: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 26 12:03:30.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-8bjmp'
Feb 26 12:03:31.019: INFO: stderr: "No resources found.\n"
Feb 26 12:03:31.019: INFO: stdout: ""
Feb 26 12:03:31.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-8bjmp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 12:03:31.174: INFO: stderr: ""
Feb 26 12:03:31.174: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:03:31.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8bjmp" for this suite.
Feb 26 12:03:38.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:03:38.069: INFO: namespace: e2e-tests-kubectl-8bjmp, resource: bindings, ignored listing per whitelist
Feb 26 12:03:38.195: INFO: namespace e2e-tests-kubectl-8bjmp deletion completed in 6.997838937s

• [SLOW TEST:19.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
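Editor's note: the Kubectl label case above adds testing-label=testing-label-value to the pause pod, verifies it through the `-L testing-label` column, removes it with the trailing-dash form `testing-label-`, and verifies the column is empty again. A minimal sketch of those four kubectl invocations; the kubeconfig and namespace are copied from the log and only apply to that run.

```go
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{
		"--kubeconfig=/root/.kube/config",
		"--namespace=e2e-tests-kubectl-8bjmp",
	}, args...)...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Add the label, show it, then remove it (key followed by "-") and show it again.
	run("label", "pods", "pause", "testing-label=testing-label-value")
	fmt.Print(run("get", "pod", "pause", "-L", "testing-label"))

	run("label", "pods", "pause", "testing-label-")
	fmt.Print(run("get", "pod", "pause", "-L", "testing-label"))
}
```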
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:03:38.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb 26 12:03:38.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 26 12:03:38.435: INFO: stderr: ""
Feb 26 12:03:38.435: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:03:38.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rvp8m" for this suite.
Feb 26 12:03:44.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:03:44.640: INFO: namespace: e2e-tests-kubectl-rvp8m, resource: bindings, ignored listing per whitelist
Feb 26 12:03:44.656: INFO: namespace e2e-tests-kubectl-rvp8m deletion completed in 6.210935401s

• [SLOW TEST:6.460 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:03:44.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:03:44.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-kdkxk" for this suite.
Feb 26 12:03:51.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:03:51.220: INFO: namespace: e2e-tests-kubelet-test-kdkxk, resource: bindings, ignored listing per whitelist
Feb 26 12:03:51.285: INFO: namespace e2e-tests-kubelet-test-kdkxk deletion completed in 6.368547821s

• [SLOW TEST:6.629 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
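The Kubelet test above schedules a pod whose container command always fails and then verifies the pod can still be deleted; a minimal sketch of that shape (pod name, container name, and image are illustrative, not the suite's generated ones):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # exits non-zero on every restart, so the pod never becomes ready
EOF
kubectl delete pod bin-false-demo --grace-period=0 --force   # deletion works even though the container keeps failing
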
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:03:51.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0d92bc7a-5890-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:03:51.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-lb5hh" to be "success or failure"
Feb 26 12:03:51.525: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.470122ms
Feb 26 12:03:53.746: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231523293s
Feb 26 12:03:55.814: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300094286s
Feb 26 12:03:57.960: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44591242s
Feb 26 12:03:59.975: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46137596s
Feb 26 12:04:01.993: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.478904741s
STEP: Saw pod success
Feb 26 12:04:01.993: INFO: Pod "pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:04:01.996: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 26 12:04:03.099: INFO: Waiting for pod pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:04:03.118: INFO: Pod pod-configmaps-0d97bc34-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:04:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lb5hh" for this suite.
Feb 26 12:04:09.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:04:09.457: INFO: namespace: e2e-tests-configmap-lb5hh, resource: bindings, ignored listing per whitelist
Feb 26 12:04:09.492: INFO: namespace e2e-tests-configmap-lb5hh deletion completed in 6.363828123s

• [SLOW TEST:18.207 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
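Consuming a ConfigMap from a volume, as this test does, looks roughly like the following; ConfigMap, pod, and key names are illustrative:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/data-1"]   # each ConfigMap key appears as a file under the mount path
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF
kubectl logs configmap-volume-demo   # prints "value-1" once the pod has completed
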
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:04:09.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb 26 12:04:09.739: INFO: Waiting up to 5m0s for pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-var-expansion-55c92" to be "success or failure"
Feb 26 12:04:09.783: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 43.282077ms
Feb 26 12:04:11.799: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058907849s
Feb 26 12:04:13.817: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077302289s
Feb 26 12:04:15.838: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098351426s
Feb 26 12:04:18.253: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51333745s
Feb 26 12:04:20.287: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.547096354s
STEP: Saw pod success
Feb 26 12:04:20.287: INFO: Pod "var-expansion-187487bb-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:04:20.512: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-187487bb-5890-11ea-8134-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 26 12:04:20.682: INFO: Waiting for pod var-expansion-187487bb-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:04:20.695: INFO: Pod var-expansion-187487bb-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:04:20.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-55c92" for this suite.
Feb 26 12:04:26.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:04:26.848: INFO: namespace: e2e-tests-var-expansion-55c92, resource: bindings, ignored listing per whitelist
Feb 26 12:04:27.199: INFO: namespace e2e-tests-var-expansion-55c92 deletion completed in 6.460181188s

• [SLOW TEST:17.707 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
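Env composition as exercised above relies on $(VAR) references to variables defined earlier in the same env list, expanded by Kubernetes rather than by a shell; a minimal sketch with illustrative names and values:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # $(FOO) expands because FOO is defined earlier in this list
EOF
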
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:04:27.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 26 12:04:27.502: INFO: Waiting up to 5m0s for pod "pod-230d0428-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-s7755" to be "success or failure"
Feb 26 12:04:27.516: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.689329ms
Feb 26 12:04:29.533: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030345995s
Feb 26 12:04:31.546: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044037808s
Feb 26 12:04:33.655: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152632704s
Feb 26 12:04:35.671: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169075385s
Feb 26 12:04:37.691: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188401657s
STEP: Saw pod success
Feb 26 12:04:37.691: INFO: Pod "pod-230d0428-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:04:37.699: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-230d0428-5890-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 12:04:37.955: INFO: Waiting for pod pod-230d0428-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:04:37.965: INFO: Pod pod-230d0428-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:04:37.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-s7755" for this suite.
Feb 26 12:04:44.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:04:45.069: INFO: namespace: e2e-tests-emptydir-s7755, resource: bindings, ignored listing per whitelist
Feb 26 12:04:45.073: INFO: namespace e2e-tests-emptydir-s7755 deletion completed in 7.094906158s

• [SLOW TEST:17.872 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
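The (root,0777,tmpfs) variant mounts a memory-backed emptyDir and then checks the mount's filesystem type, ownership, and permissions; the volume itself is declared roughly like this (pod name and mount path are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/scratch; ls -ld /mnt/scratch"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed; omit 'medium' for a node-disk-backed emptyDir
EOF
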
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:04:45.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-2da22df5-5890-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:04:45.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-z8jzx" to be "success or failure"
Feb 26 12:04:45.275: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.282863ms
Feb 26 12:04:47.331: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067509604s
Feb 26 12:04:49.344: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080401014s
Feb 26 12:04:51.366: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102365919s
Feb 26 12:04:53.399: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135415843s
Feb 26 12:04:57.305: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.042143726s
Feb 26 12:04:59.318: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.054394298s
STEP: Saw pod success
Feb 26 12:04:59.318: INFO: Pod "pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:04:59.324: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 26 12:04:59.667: INFO: Waiting for pod pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:04:59.886: INFO: Pod pod-configmaps-2da3c2ec-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:04:59.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z8jzx" for this suite.
Feb 26 12:05:05.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:05:06.278: INFO: namespace: e2e-tests-configmap-z8jzx, resource: bindings, ignored listing per whitelist
Feb 26 12:05:06.299: INFO: namespace e2e-tests-configmap-z8jzx deletion completed in 6.393208196s

• [SLOW TEST:21.226 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:05:06.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-c78bg/configmap-test-3a651b5b-5890-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:05:06.687: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-c78bg" to be "success or failure"
Feb 26 12:05:06.707: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.391696ms
Feb 26 12:05:08.746: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058657988s
Feb 26 12:05:10.830: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142969926s
Feb 26 12:05:13.803: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.116040339s
Feb 26 12:05:16.154: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.466586037s
Feb 26 12:05:18.256: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.568616297s
Feb 26 12:05:20.316: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.628888026s
STEP: Saw pod success
Feb 26 12:05:20.316: INFO: Pod "pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:05:20.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008 container env-test: 
STEP: delete the pod
Feb 26 12:05:20.541: INFO: Waiting for pod pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:05:20.558: INFO: Pod pod-configmaps-3a67804f-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:05:20.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c78bg" for this suite.
Feb 26 12:05:26.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:05:26.826: INFO: namespace: e2e-tests-configmap-c78bg, resource: bindings, ignored listing per whitelist
Feb 26 12:05:26.915: INFO: namespace e2e-tests-configmap-c78bg deletion completed in 6.337134316s

• [SLOW TEST:20.616 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
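Consuming a ConfigMap through the environment, as this [sig-node] test does, uses configMapKeyRef under env (or envFrom for a whole map); a minimal sketch, assuming the demo-config ConfigMap from the earlier sketch still exists:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF
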
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:05:26.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 26 12:05:27.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-8v5tm'
Feb 26 12:05:27.143: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 12:05:27.143: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 26 12:05:27.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-8v5tm'
Feb 26 12:05:27.362: INFO: stderr: ""
Feb 26 12:05:27.362: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:05:27.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8v5tm" for this suite.
Feb 26 12:05:49.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:05:49.566: INFO: namespace: e2e-tests-kubectl-8v5tm, resource: bindings, ignored listing per whitelist
Feb 26 12:05:49.729: INFO: namespace e2e-tests-kubectl-8v5tm deletion completed in 22.348014197s

• [SLOW TEST:22.813 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
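The stderr above notes that kubectl run --generator=job/v1 is deprecated; on newer clients (kubectl 1.14+, so not necessarily the v1.13 client used in this run) the equivalent one-liner is kubectl create job, roughly:

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl delete job e2e-test-nginx-job
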
SSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:05:49.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 26 12:05:49.924: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-wgwqw" to be "success or failure"
Feb 26 12:05:50.111: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 186.895449ms
Feb 26 12:05:52.166: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242064834s
Feb 26 12:05:54.182: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258353867s
Feb 26 12:05:56.809: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.884510963s
Feb 26 12:05:58.825: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.901013784s
Feb 26 12:06:00.839: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.915166697s
Feb 26 12:06:02.875: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.950607487s
Feb 26 12:06:04.904: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.980108566s
STEP: Saw pod success
Feb 26 12:06:04.905: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 26 12:06:04.916: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 26 12:06:05.132: INFO: Waiting for pod pod-host-path-test to disappear
Feb 26 12:06:05.143: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:06:05.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-wgwqw" for this suite.
Feb 26 12:06:11.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:06:11.318: INFO: namespace: e2e-tests-hostpath-wgwqw, resource: bindings, ignored listing per whitelist
Feb 26 12:06:11.466: INFO: namespace e2e-tests-hostpath-wgwqw deletion completed in 6.315985248s

• [SLOW TEST:21.737 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
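The hostPath test mounts a directory from the node's filesystem and checks the resulting mode inside the container; a hostPath volume is declared roughly as follows (path and names are illustrative, and hostPath generally needs permissive pod security settings):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["ls", "-ld", "/test-volume"]
    volumeMounts:
    - name: host-dir
      mountPath: /test-volume
  volumes:
  - name: host-dir
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate   # create the directory on the node if it is missing
EOF
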
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:06:11.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 26 12:06:22.658: INFO: Successfully updated pod "labelsupdate6135b558-5890-11ea-8134-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:06:24.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nzsfg" for this suite.
Feb 26 12:06:48.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:06:51.302: INFO: namespace: e2e-tests-projected-nzsfg, resource: bindings, ignored listing per whitelist
Feb 26 12:06:51.372: INFO: namespace e2e-tests-projected-nzsfg deletion completed in 26.57679434s

• [SLOW TEST:39.905 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
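The labels-update test works because a downwardAPI source in a projected volume exposes metadata.labels as a file that the kubelet rewrites when the pod's labels change; a minimal sketch with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # rewritten by the kubelet when the pod's labels change
EOF
kubectl label pod labels-update-demo stage=after --overwrite   # the mounted file reflects the change after a short delay
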
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:06:51.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:06:51.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-dv2t4" to be "success or failure"
Feb 26 12:06:51.608: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.337415ms
Feb 26 12:06:53.926: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327470348s
Feb 26 12:06:55.945: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346595479s
Feb 26 12:06:58.296: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.697377586s
Feb 26 12:07:00.417: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.818308513s
Feb 26 12:07:02.807: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.208392614s
STEP: Saw pod success
Feb 26 12:07:02.807: INFO: Pod "downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:07:02.816: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:07:03.139: INFO: Waiting for pod downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:07:03.165: INFO: Pod downwardapi-volume-78f18eef-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:07:03.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dv2t4" for this suite.
Feb 26 12:07:09.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:07:09.456: INFO: namespace: e2e-tests-projected-dv2t4, resource: bindings, ignored listing per whitelist
Feb 26 12:07:09.708: INFO: namespace e2e-tests-projected-dv2t4 deletion completed in 6.530011698s

• [SLOW TEST:18.336 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:07:09.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 26 12:07:10.421: INFO: Waiting up to 5m0s for pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-var-expansion-ms62v" to be "success or failure"
Feb 26 12:07:10.465: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 43.222518ms
Feb 26 12:07:12.827: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405781882s
Feb 26 12:07:14.865: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443573407s
Feb 26 12:07:16.933: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511032687s
Feb 26 12:07:18.944: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522669308s
Feb 26 12:07:20.991: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569656999s
STEP: Saw pod success
Feb 26 12:07:20.992: INFO: Pod "var-expansion-84257e47-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:07:21.007: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-84257e47-5890-11ea-8134-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 26 12:07:21.254: INFO: Waiting for pod var-expansion-84257e47-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:07:21.278: INFO: Pod var-expansion-84257e47-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:07:21.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-ms62v" for this suite.
Feb 26 12:07:27.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:07:27.457: INFO: namespace: e2e-tests-var-expansion-ms62v, resource: bindings, ignored listing per whitelist
Feb 26 12:07:27.543: INFO: namespace e2e-tests-var-expansion-ms62v deletion completed in 6.254527114s

• [SLOW TEST:17.834 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
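Substituting values in a container's args uses the same $(VAR) syntax, expanded by Kubernetes itself before the container starts; a minimal sketch (names and message are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]   # expanded from the env list below, no shell involved
    env:
    - name: MESSAGE
      value: "hello from args expansion"
EOF
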
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:07:27.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 12:07:27.948: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 26 12:07:28.053: INFO: Number of nodes with available pods: 0
Feb 26 12:07:28.053: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:29.072: INFO: Number of nodes with available pods: 0
Feb 26 12:07:29.072: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:30.588: INFO: Number of nodes with available pods: 0
Feb 26 12:07:30.588: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:31.098: INFO: Number of nodes with available pods: 0
Feb 26 12:07:31.098: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:32.081: INFO: Number of nodes with available pods: 0
Feb 26 12:07:32.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:33.490: INFO: Number of nodes with available pods: 0
Feb 26 12:07:33.490: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:34.085: INFO: Number of nodes with available pods: 0
Feb 26 12:07:34.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:35.071: INFO: Number of nodes with available pods: 0
Feb 26 12:07:35.071: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:36.075: INFO: Number of nodes with available pods: 0
Feb 26 12:07:36.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:37.085: INFO: Number of nodes with available pods: 1
Feb 26 12:07:37.085: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 26 12:07:37.157: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:38.220: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:39.501: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:40.282: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:41.217: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:42.230: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:43.191: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:44.186: INFO: Wrong image for pod: daemon-set-jx62m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 26 12:07:44.186: INFO: Pod daemon-set-jx62m is not available
Feb 26 12:07:45.263: INFO: Pod daemon-set-lclcm is not available
Feb 26 12:07:46.363: INFO: Pod daemon-set-lclcm is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 26 12:07:46.396: INFO: Number of nodes with available pods: 0
Feb 26 12:07:46.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:47.419: INFO: Number of nodes with available pods: 0
Feb 26 12:07:47.419: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:48.415: INFO: Number of nodes with available pods: 0
Feb 26 12:07:48.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:49.414: INFO: Number of nodes with available pods: 0
Feb 26 12:07:49.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:50.763: INFO: Number of nodes with available pods: 0
Feb 26 12:07:50.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:51.572: INFO: Number of nodes with available pods: 0
Feb 26 12:07:51.573: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:52.717: INFO: Number of nodes with available pods: 0
Feb 26 12:07:52.717: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:53.422: INFO: Number of nodes with available pods: 0
Feb 26 12:07:53.422: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:07:54.420: INFO: Number of nodes with available pods: 1
Feb 26 12:07:54.420: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9j8j2, will wait for the garbage collector to delete the pods
Feb 26 12:07:54.580: INFO: Deleting DaemonSet.extensions daemon-set took: 58.000619ms
Feb 26 12:07:54.781: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.969152ms
Feb 26 12:08:12.744: INFO: Number of nodes with available pods: 0
Feb 26 12:08:12.744: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 12:08:12.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9j8j2/daemonsets","resourceVersion":"22978206"},"items":null}

Feb 26 12:08:12.766: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9j8j2/pods","resourceVersion":"22978206"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:08:12.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-9j8j2" for this suite.
Feb 26 12:08:20.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:08:21.215: INFO: namespace: e2e-tests-daemonsets-9j8j2, resource: bindings, ignored listing per whitelist
Feb 26 12:08:21.227: INFO: namespace e2e-tests-daemonsets-9j8j2 deletion completed in 8.440064355s

• [SLOW TEST:53.683 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
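The sequence above (wrong image reported per pod, the old pod deleted, a new one created, availability re-checked) is the RollingUpdate strategy at work; a minimal DaemonSet sketch plus the image bump that triggers it (names are illustrative, the images match the ones in the log):

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate   # replace pods node by node when the template changes
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set   # waits until every node runs the new image
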
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:08:21.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 26 12:08:41.812: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 26 12:08:41.885: INFO: Pod pod-with-prestop-http-hook still exists
Feb 26 12:08:43.886: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 26 12:08:44.422: INFO: Pod pod-with-prestop-http-hook still exists
Feb 26 12:08:45.886: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 26 12:08:45.934: INFO: Pod pod-with-prestop-http-hook still exists
Feb 26 12:08:47.886: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 26 12:08:47.907: INFO: Pod pod-with-prestop-http-hook still exists
Feb 26 12:08:49.886: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 26 12:08:49.909: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:08:49.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dsv65" for this suite.
Feb 26 12:09:13.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:09:14.003: INFO: namespace: e2e-tests-container-lifecycle-hook-dsv65, resource: bindings, ignored listing per whitelist
Feb 26 12:09:14.135: INFO: namespace e2e-tests-container-lifecycle-hook-dsv65 deletion completed in 24.180050525s

• [SLOW TEST:52.908 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
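The prestop test registers an httpGet preStop hook and then checks that the handler received the request while the pod was being deleted; the hook itself is a small addition to the container spec (names are illustrative, and the e2e test points the hook's host at a separate handler pod rather than the pod's own nginx):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-hook-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /   # handler path; requests go to the pod's own IP unless 'host' is set
          port: 80
EOF
kubectl delete pod prestop-hook-demo   # the kubelet fires the preStop hook before stopping the container
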
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:09:14.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:09:14.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-chrg6" to be "success or failure"
Feb 26 12:09:14.451: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.320485ms
Feb 26 12:09:16.532: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0928857s
Feb 26 12:09:18.564: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124642932s
Feb 26 12:09:20.723: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283425236s
Feb 26 12:09:22.786: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346820188s
Feb 26 12:09:24.800: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.360392922s
Feb 26 12:09:27.560: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.120039398s
STEP: Saw pod success
Feb 26 12:09:27.560: INFO: Pod "downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:09:27.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:09:27.857: INFO: Waiting for pod downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:09:27.892: INFO: Pod downwardapi-volume-ce0e4b2a-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:09:27.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-chrg6" for this suite.
Feb 26 12:09:34.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:09:34.083: INFO: namespace: e2e-tests-projected-chrg6, resource: bindings, ignored listing per whitelist
Feb 26 12:09:34.203: INFO: namespace e2e-tests-projected-chrg6 deletion completed in 6.293388842s

• [SLOW TEST:20.066 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:09:34.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 26 12:09:34.566: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:09:34.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fjx7t" for this suite.
Feb 26 12:09:40.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:09:40.840: INFO: namespace: e2e-tests-kubectl-fjx7t, resource: bindings, ignored listing per whitelist
Feb 26 12:09:40.920: INFO: namespace e2e-tests-kubectl-fjx7t deletion completed in 6.227543541s

• [SLOW TEST:6.717 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
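Passing -p 0 (or --port=0) makes kubectl proxy bind an ephemeral port and print the one it chose, which is what the test then curls; roughly:

kubectl proxy --port=0 --disable-filter &   # prints "Starting to serve on 127.0.0.1:<port>"; --disable-filter is only safe for test setups
curl http://127.0.0.1:<port>/api/           # substitute the printed port
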
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:09:40.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:09:41.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-9tdb8" to be "success or failure"
Feb 26 12:09:41.189: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.475881ms
Feb 26 12:09:43.242: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077796045s
Feb 26 12:09:45.320: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15579616s
Feb 26 12:09:47.778: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613558266s
Feb 26 12:09:49.816: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.651327503s
Feb 26 12:09:51.846: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.681205171s
STEP: Saw pod success
Feb 26 12:09:51.846: INFO: Pod "downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:09:51.858: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:09:52.888: INFO: Waiting for pod downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008 to disappear
Feb 26 12:09:52.919: INFO: Pod downwardapi-volume-de02cfe2-5890-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:09:52.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9tdb8" for this suite.
Feb 26 12:09:59.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:09:59.132: INFO: namespace: e2e-tests-downward-api-9tdb8, resource: bindings, ignored listing per whitelist
Feb 26 12:09:59.295: INFO: namespace e2e-tests-downward-api-9tdb8 deletion completed in 6.349371539s

• [SLOW TEST:18.374 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
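
The "pod to test downward API volume plugin" in the block above is, roughly, a single container with no CPU limit plus a downwardAPI volume that projects limits.cpu into a file; with no limit set, the value falls back to the node's allocatable CPU. The sketch below only builds and prints such an object with k8s.io/api types; the pod name, busybox image, command, and mount path are placeholders, not the exact spec the framework generates.

    // downward_cpu.go - pod whose downwardAPI volume exposes the container's
    // effective CPU limit as a file. Name, image, and paths are placeholders.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    // No resources.limits.cpu here, so the projected value
                    // defaults to the node's allocatable CPU.
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
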
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:09:59.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0226 12:10:30.164746       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 12:10:30.164: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:10:30.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4vqb4" for this suite.
Feb 26 12:10:40.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:10:40.985: INFO: namespace: e2e-tests-gc-4vqb4, resource: bindings, ignored listing per whitelist
Feb 26 12:10:40.996: INFO: namespace e2e-tests-gc-4vqb4 deletion completed in 10.824891885s

• [SLOW TEST:41.701 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
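
The step that matters in the garbage-collector test above is the delete call: the deployment is removed with deleteOptions.propagationPolicy=Orphan, so the ReplicaSet it owns must survive the 30-second watch window. The sketch below constructs and prints the DeleteOptions body such a request carries; it does not talk to an API server, so no particular client-go version is assumed.

    // orphan_delete.go - the DeleteOptions used to orphan dependents
    // (here: the ReplicaSet owned by the Deployment) instead of cascading.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        policy := metav1.DeletePropagationOrphan // vs. Background / Foreground
        opts := metav1.DeleteOptions{PropagationPolicy: &policy}

        body, err := json.MarshalIndent(opts, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        // This is the body a DELETE on the deployment would carry; with Orphan,
        // owner references are cleared on dependents rather than the dependents
        // being garbage collected.
        fmt.Println(string(body))
    }
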
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:10:40.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 12:10:41.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:10:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ms7zh" for this suite.
Feb 26 12:11:35.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:11:35.904: INFO: namespace: e2e-tests-pods-ms7zh, resource: bindings, ignored listing per whitelist
Feb 26 12:11:35.909: INFO: namespace e2e-tests-pods-ms7zh deletion completed in 44.197254663s

• [SLOW TEST:54.913 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
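
The pods test above reads container logs over a websocket connection to the API server's pod log subresource. A websocket client needs cluster credentials and protocol negotiation, so the sketch below shows only the plain-HTTP analogue of the same subresource request issued through a local kubectl proxy; the namespace, pod, and container names are placeholders, and this is not the framework's own websocket code.

    // podlogs.go - read a pod's log subresource through `kubectl proxy`.
    // The conformance test exercises this endpoint over websockets; this is
    // the plain-HTTP analogue. All names below are placeholders.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        url := "http://127.0.0.1:8001/api/v1/namespaces/default/pods/example-pod/log?container=main"
        resp, err := http.Get(url)
        if err != nil {
            log.Fatalf("log request failed: %v", err)
        }
        defer resp.Body.Close()

        data, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("status=%d\n%s", resp.StatusCode, data)
    }
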
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:11:35.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-228c2ee4-5891-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 12:11:36.149: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-pctfq" to be "success or failure"
Feb 26 12:11:36.235: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 86.314126ms
Feb 26 12:11:38.261: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112348229s
Feb 26 12:11:40.276: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127263642s
Feb 26 12:11:42.290: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140837878s
Feb 26 12:11:44.724: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.575319554s
Feb 26 12:11:46.768: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619075175s
STEP: Saw pod success
Feb 26 12:11:46.769: INFO: Pod "pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:11:46.798: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 26 12:11:46.979: INFO: Waiting for pod pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008 to disappear
Feb 26 12:11:47.088: INFO: Pod pod-projected-secrets-228ce820-5891-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:11:47.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pctfq" for this suite.
Feb 26 12:11:53.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:11:53.303: INFO: namespace: e2e-tests-projected-pctfq, resource: bindings, ignored listing per whitelist
Feb 26 12:11:53.419: INFO: namespace e2e-tests-projected-pctfq deletion completed in 6.308861308s

• [SLOW TEST:17.509 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
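
The pod in the projected-secret test above consumes a secret through a projected volume while running as a non-root user, with an explicit defaultMode and an fsGroup so the mounted files stay readable. A rough equivalent built with k8s.io/api types follows; the secret name, UID/GID values, file mode, image, and command are illustrative placeholders rather than the framework's generated values.

    // projected_secret.go - pod consuming a secret via a projected volume,
    // running as non-root with defaultMode and fsGroup set. All names and
    // numeric IDs are placeholders.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64p(v int64) *int64 { return &v }

    func main() {
        mode := int32(0400)
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: int64p(1000), // non-root
                    FSGroup:   int64p(1001), // group ownership of volume files
                },
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls -l /etc/projected"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret-volume",
                        MountPath: "/etc/projected",
                        ReadOnly:  true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &mode,
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{
                                        Name: "projected-secret-example",
                                    },
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
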
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:11:53.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:12:04.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-x68tc" for this suite.
Feb 26 12:12:12.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:12:12.252: INFO: namespace: e2e-tests-emptydir-wrapper-x68tc, resource: bindings, ignored listing per whitelist
Feb 26 12:12:12.334: INFO: namespace e2e-tests-emptydir-wrapper-x68tc deletion completed in 8.30902274s

• [SLOW TEST:18.915 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:12:12.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xl9dr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 26 12:12:12.585: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 26 12:12:42.898: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xl9dr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 12:12:42.898: INFO: >>> kubeConfig: /root/.kube/config
I0226 12:12:43.014813       9 log.go:172] (0xc0000dcd10) (0xc0021aa960) Create stream
I0226 12:12:43.015011       9 log.go:172] (0xc0000dcd10) (0xc0021aa960) Stream added, broadcasting: 1
I0226 12:12:43.020807       9 log.go:172] (0xc0000dcd10) Reply frame received for 1
I0226 12:12:43.020852       9 log.go:172] (0xc0000dcd10) (0xc001d603c0) Create stream
I0226 12:12:43.020868       9 log.go:172] (0xc0000dcd10) (0xc001d603c0) Stream added, broadcasting: 3
I0226 12:12:43.022455       9 log.go:172] (0xc0000dcd10) Reply frame received for 3
I0226 12:12:43.022497       9 log.go:172] (0xc0000dcd10) (0xc0023b6000) Create stream
I0226 12:12:43.022513       9 log.go:172] (0xc0000dcd10) (0xc0023b6000) Stream added, broadcasting: 5
I0226 12:12:43.023681       9 log.go:172] (0xc0000dcd10) Reply frame received for 5
I0226 12:12:43.233660       9 log.go:172] (0xc0000dcd10) Data frame received for 3
I0226 12:12:43.233801       9 log.go:172] (0xc001d603c0) (3) Data frame handling
I0226 12:12:43.233886       9 log.go:172] (0xc001d603c0) (3) Data frame sent
I0226 12:12:43.388219       9 log.go:172] (0xc0000dcd10) Data frame received for 1
I0226 12:12:43.388399       9 log.go:172] (0xc0000dcd10) (0xc001d603c0) Stream removed, broadcasting: 3
I0226 12:12:43.388546       9 log.go:172] (0xc0021aa960) (1) Data frame handling
I0226 12:12:43.388597       9 log.go:172] (0xc0021aa960) (1) Data frame sent
I0226 12:12:43.388664       9 log.go:172] (0xc0000dcd10) (0xc0023b6000) Stream removed, broadcasting: 5
I0226 12:12:43.388741       9 log.go:172] (0xc0000dcd10) (0xc0021aa960) Stream removed, broadcasting: 1
I0226 12:12:43.388787       9 log.go:172] (0xc0000dcd10) Go away received
I0226 12:12:43.389258       9 log.go:172] (0xc0000dcd10) (0xc0021aa960) Stream removed, broadcasting: 1
I0226 12:12:43.389326       9 log.go:172] (0xc0000dcd10) (0xc001d603c0) Stream removed, broadcasting: 3
I0226 12:12:43.389352       9 log.go:172] (0xc0000dcd10) (0xc0023b6000) Stream removed, broadcasting: 5
Feb 26 12:12:43.389: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:12:43.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xl9dr" for this suite.
Feb 26 12:13:13.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:13:13.671: INFO: namespace: e2e-tests-pod-network-test-xl9dr, resource: bindings, ignored listing per whitelist
Feb 26 12:13:13.674: INFO: namespace e2e-tests-pod-network-test-xl9dr deletion completed in 30.270387959s

• [SLOW TEST:61.340 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
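
The intra-pod HTTP check above works by exec'ing a curl inside the host-network test container, which asks the netexec pod at 10.32.0.5:8080 to /dial another pod at 10.32.0.4:8080 and report which hostnames answered. A minimal Go version of that dial request is below; the IPs and port come from this run's log and would differ per cluster, and the /dial endpoint belongs to the e2e netexec test image, not to Kubernetes itself.

    // dialcheck.go - reproduce the /dial probe from the pod-to-pod HTTP check.
    // The addresses are the ones seen in this run's log and stand in for
    // whatever pod IPs a given cluster assigns.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "net/url"
        "time"
    )

    func main() {
        q := url.Values{}
        q.Set("request", "hostName") // what the target pod should echo back
        q.Set("protocol", "http")
        q.Set("host", "10.32.0.4")
        q.Set("port", "8080")
        q.Set("tries", "1")

        endpoint := "http://10.32.0.5:8080/dial?" + q.Encode()
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get(endpoint)
        if err != nil {
            log.Fatalf("dial request failed: %v", err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // The test passes once the set of responding hostnames matches the
        // expected endpoints; an empty map means nothing has answered yet.
        fmt.Printf("%s\n", body)
    }
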
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:13:13.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0226 12:13:55.948604       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 12:13:55.948: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:13:55.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dr6z8" for this suite.
Feb 26 12:14:10.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:14:10.172: INFO: namespace: e2e-tests-gc-dr6z8, resource: bindings, ignored listing per whitelist
Feb 26 12:14:11.129: INFO: namespace e2e-tests-gc-dr6z8 deletion completed in 15.176006575s

• [SLOW TEST:57.453 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:14:11.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-z2r9
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 12:14:15.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-z2r9" in namespace "e2e-tests-subpath-j74bc" to be "success or failure"
Feb 26 12:14:15.146: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 51.658342ms
Feb 26 12:14:17.293: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198341854s
Feb 26 12:14:19.331: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236573235s
Feb 26 12:14:21.347: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252138884s
Feb 26 12:14:23.549: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454648402s
Feb 26 12:14:25.984: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.88930922s
Feb 26 12:14:28.003: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.908992228s
Feb 26 12:14:30.016: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.921315437s
Feb 26 12:14:32.033: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.938455593s
Feb 26 12:14:34.051: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.956701052s
Feb 26 12:14:36.069: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 20.97459719s
Feb 26 12:14:38.083: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 22.988958199s
Feb 26 12:14:40.102: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 25.007765907s
Feb 26 12:14:42.117: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 27.022836871s
Feb 26 12:14:44.140: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 29.045022495s
Feb 26 12:14:46.158: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 31.063289767s
Feb 26 12:14:48.177: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 33.082934187s
Feb 26 12:14:50.197: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 35.10215516s
Feb 26 12:14:52.218: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Running", Reason="", readiness=false. Elapsed: 37.123793199s
Feb 26 12:14:54.249: INFO: Pod "pod-subpath-test-projected-z2r9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.154742919s
STEP: Saw pod success
Feb 26 12:14:54.250: INFO: Pod "pod-subpath-test-projected-z2r9" satisfied condition "success or failure"
Feb 26 12:14:54.259: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-z2r9 container test-container-subpath-projected-z2r9: 
STEP: delete the pod
Feb 26 12:14:54.434: INFO: Waiting for pod pod-subpath-test-projected-z2r9 to disappear
Feb 26 12:14:54.491: INFO: Pod pod-subpath-test-projected-z2r9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-z2r9
Feb 26 12:14:54.491: INFO: Deleting pod "pod-subpath-test-projected-z2r9" in namespace "e2e-tests-subpath-j74bc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:14:54.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-j74bc" for this suite.
Feb 26 12:15:00.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:15:00.809: INFO: namespace: e2e-tests-subpath-j74bc, resource: bindings, ignored listing per whitelist
Feb 26 12:15:00.844: INFO: namespace e2e-tests-subpath-j74bc deletion completed in 6.242411542s

• [SLOW TEST:49.713 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
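
The subpath test above mounts a single entry of a projected volume via volumeMounts[].subPath and has the container read it while the atomic-writer volume contents are updated underneath. The sketch below shows just that subPath wiring with k8s.io/api types; the configMap source, key, paths, image, and command are placeholders rather than the framework's generated spec.

    // subpath_projected.go - mount one file out of a projected volume using
    // volumeMounts[].subPath. Names, keys, and paths are placeholders.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "while true; do cat /test/file; sleep 1; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-vol",
                        MountPath: "/test/file",
                        SubPath:   "path/to/file", // a single entry of the volume
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-vol",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"},
                                    Items: []corev1.KeyToPath{{
                                        Key:  "file",
                                        Path: "path/to/file",
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
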
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:15:00.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 12:15:01.059: INFO: Creating deployment "test-recreate-deployment"
Feb 26 12:15:01.102: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 26 12:15:01.154: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 26 12:15:03.182: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 26 12:15:03.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:15:05.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:15:07.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:15:09.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718316101, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:15:11.204: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 26 12:15:11.220: INFO: Updating deployment test-recreate-deployment
Feb 26 12:15:11.220: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 26 12:15:11.914: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-pszcc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pszcc/deployments/test-recreate-deployment,UID:9cb1da72-5891-11ea-a994-fa163e34d433,ResourceVersion:22979257,Generation:2,CreationTimestamp:2020-02-26 12:15:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-26 12:15:11 +0000 UTC 2020-02-26 12:15:11 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-26 12:15:11 +0000 UTC 2020-02-26 12:15:01 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 26 12:15:12.047: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-pszcc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pszcc/replicasets/test-recreate-deployment-589c4bfd,UID:a2f8e818-5891-11ea-a994-fa163e34d433,ResourceVersion:22979256,Generation:1,CreationTimestamp:2020-02-26 12:15:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9cb1da72-5891-11ea-a994-fa163e34d433 0xc00220284f 0xc002202860}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 26 12:15:12.047: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 26 12:15:12.048: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-pszcc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-pszcc/replicasets/test-recreate-deployment-5bf7f65dc,UID:9cc0327a-5891-11ea-a994-fa163e34d433,ResourceVersion:22979245,Generation:2,CreationTimestamp:2020-02-26 12:15:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9cb1da72-5891-11ea-a994-fa163e34d433 0xc002202920 0xc002202921}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 26 12:15:12.061: INFO: Pod "test-recreate-deployment-589c4bfd-2k4n9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2k4n9,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-pszcc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-pszcc/pods/test-recreate-deployment-589c4bfd-2k4n9,UID:a2fa439b-5891-11ea-a994-fa163e34d433,ResourceVersion:22979258,Generation:0,CreationTimestamp:2020-02-26 12:15:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a2f8e818-5891-11ea-a994-fa163e34d433 0xc00220327f 0xc002203310}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-kmtmk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kmtmk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-kmtmk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002203370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002203390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:15:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:15:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:15:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:15:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-26 12:15:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:15:12.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-pszcc" for this suite.
Feb 26 12:15:22.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:15:23.168: INFO: namespace: e2e-tests-deployment-pszcc, resource: bindings, ignored listing per whitelist
Feb 26 12:15:23.192: INFO: namespace e2e-tests-deployment-pszcc deletion completed in 11.118950549s

• [SLOW TEST:22.348 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
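
The deployment above uses the Recreate strategy, which is what forces every old-revision pod to be deleted before any new-revision pod is created (in contrast to RollingUpdate). A minimal construction of such a deployment with k8s.io/api types is below; the name and replica count are placeholders, while the label and nginx image match the values dumped in this run's log.

    // recreate_deployment.go - a Deployment with strategy Recreate, so a
    // rollout deletes all old pods before creating new ones.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        labels := map[string]string{"name": "sample-pod-3"}

        dep := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment-example"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RecreateDeploymentStrategyType, // no RollingUpdate block
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        out, err := json.MarshalIndent(dep, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
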
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:15:23.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qdbhx
Feb 26 12:15:33.438: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qdbhx
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 12:15:33.448: INFO: Initial restart count of pod liveness-http is 0
Feb 26 12:15:53.793: INFO: Restart count of pod e2e-tests-container-probe-qdbhx/liveness-http is now 1 (20.345114025s elapsed)
Feb 26 12:16:12.256: INFO: Restart count of pod e2e-tests-container-probe-qdbhx/liveness-http is now 2 (38.808044945s elapsed)
Feb 26 12:16:32.577: INFO: Restart count of pod e2e-tests-container-probe-qdbhx/liveness-http is now 3 (59.129014803s elapsed)
Feb 26 12:16:52.791: INFO: Restart count of pod e2e-tests-container-probe-qdbhx/liveness-http is now 4 (1m19.342989066s elapsed)
Feb 26 12:17:53.460: INFO: Restart count of pod e2e-tests-container-probe-qdbhx/liveness-http is now 5 (2m20.012013528s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:17:53.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-qdbhx" for this suite.
Feb 26 12:17:59.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:17:59.791: INFO: namespace: e2e-tests-container-probe-qdbhx, resource: bindings, ignored listing per whitelist
Feb 26 12:17:59.882: INFO: namespace e2e-tests-container-probe-qdbhx deletion completed in 6.360550491s

• [SLOW TEST:156.690 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
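
The liveness-http pod above keeps restarting because its HTTP liveness probe keeps failing; the test only asserts that restartCount grows monotonically as the kubelet restarts the container. The sketch below shows what such an HTTP liveness probe looks like on a container, using k8s.io/api types; the /healthz path, port, timing values, and image are placeholders, and the HTTPGet field is set through the probe's embedded handler so the snippet does not depend on a specific k8s.io/api release.

    // liveness_probe.go - container with an HTTP liveness probe; when the
    // probe fails repeatedly the kubelet restarts the container and the
    // pod's restartCount increases. Path, port, timings, image: placeholders.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            InitialDelaySeconds: 15,
            PeriodSeconds:       5,
            FailureThreshold:    1,
        }
        // Assigned via the embedded handler struct (field promotion), which
        // works whether the API package names it Handler or ProbeHandler.
        probe.HTTPGet = &corev1.HTTPGetAction{
            Path: "/healthz",
            Port: intstr.FromInt(8080),
        }

        container := corev1.Container{
            Name:          "liveness",
            Image:         "k8s.gcr.io/liveness", // placeholder image
            LivenessProbe: &probe,
        }

        out, err := json.MarshalIndent(container, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
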
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:17:59.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-tspvb
Feb 26 12:18:12.149: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-tspvb
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 12:18:12.157: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:22:13.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tspvb" for this suite.
Feb 26 12:22:19.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:22:19.428: INFO: namespace: e2e-tests-container-probe-tspvb, resource: bindings, ignored listing per whitelist
Feb 26 12:22:19.503: INFO: namespace e2e-tests-container-probe-tspvb deletion completed in 6.303612946s

• [SLOW TEST:259.621 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:22:19.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a22b79f5-5892-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 12:22:19.767: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-479m4" to be "success or failure"
Feb 26 12:22:19.786: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.755798ms
Feb 26 12:22:21.829: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061595812s
Feb 26 12:22:24.272: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505382335s
Feb 26 12:22:26.285: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.518389724s
Feb 26 12:22:28.307: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539628294s
Feb 26 12:22:30.333: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.565710519s
Feb 26 12:22:32.374: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.606524686s
STEP: Saw pod success
Feb 26 12:22:32.374: INFO: Pod "pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:22:32.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 26 12:22:32.666: INFO: Waiting for pod pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008 to disappear
Feb 26 12:22:32.679: INFO: Pod pod-projected-secrets-a22cee36-5892-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:22:32.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-479m4" for this suite.
Feb 26 12:22:40.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:22:40.839: INFO: namespace: e2e-tests-projected-479m4, resource: bindings, ignored listing per whitelist
Feb 26 12:22:40.961: INFO: namespace e2e-tests-projected-479m4 deletion completed in 8.270944819s

• [SLOW TEST:21.458 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:22:40.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 26 12:22:47.575: INFO: 10 pods remaining
Feb 26 12:22:47.575: INFO: 10 pods have nil DeletionTimestamp
Feb 26 12:22:47.575: INFO: 
Feb 26 12:22:48.642: INFO: 5 pods remaining
Feb 26 12:22:48.642: INFO: 0 pods have nil DeletionTimestamp
Feb 26 12:22:48.642: INFO: 
STEP: Gathering metrics
W0226 12:22:49.411298       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 26 12:22:49.411: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:22:49.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kgqrq" for this suite.
Feb 26 12:23:07.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:23:07.564: INFO: namespace: e2e-tests-gc-kgqrq, resource: bindings, ignored listing per whitelist
Feb 26 12:23:07.727: INFO: namespace e2e-tests-gc-kgqrq deletion completed in 18.308073304s

• [SLOW TEST:26.766 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
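The garbage-collector spec above deletes a ReplicationController with delete options that keep the RC object around until the garbage collector has removed all of its pods (the "10 pods remaining" / "5 pods remaining" countdown). A minimal client-go sketch of that kind of foreground delete follows; the kubeconfig path, namespace, and RC name are assumptions, and the context-taking Delete signature shown is the one in recent client-go releases (the v1.13-era client used in this run omits the context argument).

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC is only removed after the garbage
	// collector has deleted all of its pods, which is the behaviour the
	// test above waits on.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "my-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}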
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:23:07.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 26 12:23:07.964: INFO: Waiting up to 5m0s for pod "pod-bee742ad-5892-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-sg8g7" to be "success or failure"
Feb 26 12:23:07.978: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.315048ms
Feb 26 12:23:11.180: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.216298766s
Feb 26 12:23:13.194: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.23000941s
Feb 26 12:23:15.319: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.354910098s
Feb 26 12:23:17.360: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.39654576s
Feb 26 12:23:19.377: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.412967689s
STEP: Saw pod success
Feb 26 12:23:19.377: INFO: Pod "pod-bee742ad-5892-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:23:19.383: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bee742ad-5892-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 12:23:19.470: INFO: Waiting for pod pod-bee742ad-5892-11ea-8134-0242ac110008 to disappear
Feb 26 12:23:19.475: INFO: Pod pod-bee742ad-5892-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:23:19.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sg8g7" for this suite.
Feb 26 12:23:25.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:23:25.747: INFO: namespace: e2e-tests-emptydir-sg8g7, resource: bindings, ignored listing per whitelist
Feb 26 12:23:25.831: INFO: namespace e2e-tests-emptydir-sg8g7 deletion completed in 6.349748898s

• [SLOW TEST:18.104 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
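The emptyDir spec above creates a pod with a memory-backed (tmpfs) emptyDir, runs the container as a non-root user, and checks that a file with 0666 permissions behaves as expected. The sketch below shows the kind of pod spec involved, not the exact helper the e2e framework builds; the image, UID, and shell command are placeholders (the real test uses its own mount-test image).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod sketches a pod with a tmpfs-backed emptyDir mounted by a
// non-root container that writes a file and checks its 0666 permissions.
func emptyDirTmpfsPod() *corev1.Pod {
	nonRootUID := int64(1000) // placeholder; the real test uses its own fixed non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && stat -c '%a' /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/scratch",
				}},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(emptyDirTmpfsPod(), "", "  ")
	fmt.Println(string(out))
}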
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:23:25.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-gq92
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 12:23:26.121: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gq92" in namespace "e2e-tests-subpath-46ntj" to be "success or failure"
Feb 26 12:23:26.127: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039523ms
Feb 26 12:23:28.149: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027471758s
Feb 26 12:23:30.169: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047242334s
Feb 26 12:23:32.214: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093073923s
Feb 26 12:23:34.250: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128909382s
Feb 26 12:23:36.280: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 10.158640371s
Feb 26 12:23:38.296: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 12.174893988s
Feb 26 12:23:40.755: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Pending", Reason="", readiness=false. Elapsed: 14.633104118s
Feb 26 12:23:42.764: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 16.642306609s
Feb 26 12:23:44.789: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 18.667758742s
Feb 26 12:23:46.818: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 20.696495793s
Feb 26 12:23:48.837: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 22.71592764s
Feb 26 12:23:50.888: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 24.766625377s
Feb 26 12:23:52.904: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 26.78269838s
Feb 26 12:23:54.922: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 28.800822882s
Feb 26 12:23:56.941: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 30.819290326s
Feb 26 12:23:58.959: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Running", Reason="", readiness=false. Elapsed: 32.837726363s
Feb 26 12:24:00.993: INFO: Pod "pod-subpath-test-secret-gq92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.871751205s
STEP: Saw pod success
Feb 26 12:24:00.994: INFO: Pod "pod-subpath-test-secret-gq92" satisfied condition "success or failure"
Feb 26 12:24:01.020: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gq92 container test-container-subpath-secret-gq92: 
STEP: delete the pod
Feb 26 12:24:01.205: INFO: Waiting for pod pod-subpath-test-secret-gq92 to disappear
Feb 26 12:24:01.229: INFO: Pod pod-subpath-test-secret-gq92 no longer exists
STEP: Deleting pod pod-subpath-test-secret-gq92
Feb 26 12:24:01.230: INFO: Deleting pod "pod-subpath-test-secret-gq92" in namespace "e2e-tests-subpath-46ntj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:24:01.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-46ntj" for this suite.
Feb 26 12:24:09.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:24:09.503: INFO: namespace: e2e-tests-subpath-46ntj, resource: bindings, ignored listing per whitelist
Feb 26 12:24:09.525: INFO: namespace e2e-tests-subpath-46ntj deletion completed in 8.220362482s

• [SLOW TEST:43.693 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
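The subpath spec above mounts a single entry of a secret volume at a subPath inside the container and watches the file over time (hence the long Running phase before Succeeded). A minimal sketch of such a pod follows; the secret name, key, image, and command are assumptions, not the framework's exact test pod.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathSecretPod sketches a pod that mounts one key of a secret volume via
// VolumeMount.SubPath, the mechanism the atomic-writer subpath test exercises.
func subpathSecretPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // placeholder name
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /mnt/data && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/mnt/data",
					SubPath:   "data-key", // mount only this key of the secret
				}},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(subpathSecretPod(), "", "  ")
	fmt.Println(string(out))
}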
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:24:09.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-e3cbcfe5-5892-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e3cbcfe5-5892-11ea-8134-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:24:24.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jbvc4" for this suite.
Feb 26 12:24:48.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:24:48.390: INFO: namespace: e2e-tests-configmap-jbvc4, resource: bindings, ignored listing per whitelist
Feb 26 12:24:48.493: INFO: namespace e2e-tests-configmap-jbvc4 deletion completed in 24.250802909s

• [SLOW TEST:38.968 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
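The ConfigMap-update spec above creates a ConfigMap, mounts it into a pod as a volume, updates the ConfigMap, and then polls until the kubelet syncs the new value into the mounted file (propagation is eventual, which is why the test "waits to observe update in volume"). Below is a hedged client-go sketch of the create-then-update step only; the kubeconfig path, namespace, and names are placeholders, and the context-taking signatures are those of recent client-go releases.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms := client.CoreV1().ConfigMaps("default") // placeholder namespace

	// Create the ConfigMap a pod will mount as a volume.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	created, err := cms.Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Later: change the data in place. The kubelet re-syncs configMap
	// volumes periodically, so the mounted file catches up eventually
	// rather than instantly.
	created.Data["data-1"] = "value-2"
	if _, err := cms.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}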
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:24:48.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 26 12:24:48.784: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 26 12:24:48.801: INFO: Waiting for terminating namespaces to be deleted...
Feb 26 12:24:48.806: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 26 12:24:48.819: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 12:24:48.819: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 26 12:24:48.819: INFO: 	Container coredns ready: true, restart count 0
Feb 26 12:24:48.819: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 26 12:24:48.819: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 12:24:48.819: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 12:24:48.819: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 26 12:24:48.819: INFO: 	Container weave ready: true, restart count 0
Feb 26 12:24:48.819: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 12:24:48.819: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 26 12:24:48.819: INFO: 	Container coredns ready: true, restart count 0
Feb 26 12:24:48.819: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 12:24:48.819: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 26 12:24:48.954: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fb1b0a51-5892-11ea-8134-0242ac110008.15f6f35a7ed41268], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-qzgpf/filler-pod-fb1b0a51-5892-11ea-8134-0242ac110008 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fb1b0a51-5892-11ea-8134-0242ac110008.15f6f35b94067ffd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fb1b0a51-5892-11ea-8134-0242ac110008.15f6f35c356099c1], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fb1b0a51-5892-11ea-8134-0242ac110008.15f6f35c5e977b7f], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f6f35cd553c95f], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:25:00.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qzgpf" for this suite.
Feb 26 12:25:08.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:25:08.552: INFO: namespace: e2e-tests-sched-pred-qzgpf, resource: bindings, ignored listing per whitelist
Feb 26 12:25:08.601: INFO: namespace e2e-tests-sched-pred-qzgpf deletion completed in 8.301461332s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.106 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
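The scheduler-predicates spec above fills the node with "filler" pods sized to consume most of the allocatable CPU, then creates one more pod whose request cannot fit and expects the FailedScheduling event ("0/1 nodes are available: 1 Insufficient cpu."). The sketch below shows a pod carrying such an oversized request; the 600m figure is an arbitrary placeholder, whereas the real test computes the request from the node's remaining allocatable CPU.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overcommittedPod sketches the "additional-pod" idea: a pod whose CPU request
// exceeds what is still allocatable on the node, so it stays Pending with a
// FailedScheduling event.
func overcommittedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Placeholder amount; pick anything larger than the
						// node's remaining allocatable CPU.
						corev1.ResourceCPU: resource.MustParse("600m"),
					},
				},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(overcommittedPod(), "", "  ")
	fmt.Println(string(out))
}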
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:25:08.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-97gh
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 12:25:09.933: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-97gh" in namespace "e2e-tests-subpath-pr4pk" to be "success or failure"
Feb 26 12:25:09.944: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.822985ms
Feb 26 12:25:12.532: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599072491s
Feb 26 12:25:14.555: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622128109s
Feb 26 12:25:16.584: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.650711961s
Feb 26 12:25:19.104: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.171405189s
Feb 26 12:25:21.117: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 11.18372515s
Feb 26 12:25:23.205: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 13.271830489s
Feb 26 12:25:25.224: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 15.291297619s
Feb 26 12:25:27.438: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Pending", Reason="", readiness=false. Elapsed: 17.505372812s
Feb 26 12:25:29.452: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 19.518851594s
Feb 26 12:25:31.477: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 21.543810961s
Feb 26 12:25:33.514: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 23.58076846s
Feb 26 12:25:35.530: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 25.59736561s
Feb 26 12:25:37.548: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 27.6150189s
Feb 26 12:25:39.563: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 29.629685115s
Feb 26 12:25:41.594: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 31.661277271s
Feb 26 12:25:43.663: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 33.729874398s
Feb 26 12:25:45.690: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Running", Reason="", readiness=false. Elapsed: 35.756724046s
Feb 26 12:25:47.717: INFO: Pod "pod-subpath-test-configmap-97gh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.783457172s
STEP: Saw pod success
Feb 26 12:25:47.717: INFO: Pod "pod-subpath-test-configmap-97gh" satisfied condition "success or failure"
Feb 26 12:25:47.736: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-97gh container test-container-subpath-configmap-97gh: 
STEP: delete the pod
Feb 26 12:25:47.874: INFO: Waiting for pod pod-subpath-test-configmap-97gh to disappear
Feb 26 12:25:48.013: INFO: Pod pod-subpath-test-configmap-97gh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-97gh
Feb 26 12:25:48.013: INFO: Deleting pod "pod-subpath-test-configmap-97gh" in namespace "e2e-tests-subpath-pr4pk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:25:48.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-pr4pk" for this suite.
Feb 26 12:25:54.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:25:54.384: INFO: namespace: e2e-tests-subpath-pr4pk, resource: bindings, ignored listing per whitelist
Feb 26 12:25:54.391: INFO: namespace e2e-tests-subpath-pr4pk deletion completed in 6.355733154s

• [SLOW TEST:45.787 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:25:54.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb 26 12:25:54.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ls26d'
Feb 26 12:25:56.758: INFO: stderr: ""
Feb 26 12:25:56.758: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb 26 12:25:57.778: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:25:57.778: INFO: Found 0 / 1
Feb 26 12:25:59.313: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:25:59.314: INFO: Found 0 / 1
Feb 26 12:25:59.775: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:25:59.775: INFO: Found 0 / 1
Feb 26 12:26:00.776: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:00.776: INFO: Found 0 / 1
Feb 26 12:26:01.772: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:01.772: INFO: Found 0 / 1
Feb 26 12:26:03.555: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:03.555: INFO: Found 0 / 1
Feb 26 12:26:03.901: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:03.901: INFO: Found 0 / 1
Feb 26 12:26:04.899: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:04.899: INFO: Found 0 / 1
Feb 26 12:26:05.771: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:05.771: INFO: Found 0 / 1
Feb 26 12:26:06.780: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:06.781: INFO: Found 1 / 1
Feb 26 12:26:06.781: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 26 12:26:06.790: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:26:06.790: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 26 12:26:06.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-25knq redis-master --namespace=e2e-tests-kubectl-ls26d'
Feb 26 12:26:07.002: INFO: stderr: ""
Feb 26 12:26:07.002: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Feb 12:26:05.958 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Feb 12:26:05.958 # Server started, Redis version 3.2.12\n1:M 26 Feb 12:26:05.958 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Feb 12:26:05.959 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 26 12:26:07.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25knq redis-master --namespace=e2e-tests-kubectl-ls26d --tail=1'
Feb 26 12:26:07.153: INFO: stderr: ""
Feb 26 12:26:07.153: INFO: stdout: "1:M 26 Feb 12:26:05.959 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 26 12:26:07.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25knq redis-master --namespace=e2e-tests-kubectl-ls26d --limit-bytes=1'
Feb 26 12:26:07.275: INFO: stderr: ""
Feb 26 12:26:07.275: INFO: stdout: " "
STEP: exposing timestamps
Feb 26 12:26:07.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25knq redis-master --namespace=e2e-tests-kubectl-ls26d --tail=1 --timestamps'
Feb 26 12:26:07.387: INFO: stderr: ""
Feb 26 12:26:07.387: INFO: stdout: "2020-02-26T12:26:05.960074488Z 1:M 26 Feb 12:26:05.959 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 26 12:26:09.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25knq redis-master --namespace=e2e-tests-kubectl-ls26d --since=1s'
Feb 26 12:26:10.045: INFO: stderr: ""
Feb 26 12:26:10.045: INFO: stdout: ""
Feb 26 12:26:10.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25knq redis-master --namespace=e2e-tests-kubectl-ls26d --since=24h'
Feb 26 12:26:10.206: INFO: stderr: ""
Feb 26 12:26:10.207: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Feb 12:26:05.958 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Feb 12:26:05.958 # Server started, Redis version 3.2.12\n1:M 26 Feb 12:26:05.958 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Feb 12:26:05.959 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb 26 12:26:10.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ls26d'
Feb 26 12:26:10.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 12:26:10.353: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 26 12:26:10.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-ls26d'
Feb 26 12:26:10.742: INFO: stderr: "No resources found.\n"
Feb 26 12:26:10.742: INFO: stdout: ""
Feb 26 12:26:10.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-ls26d -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 12:26:11.020: INFO: stderr: ""
Feb 26 12:26:11.020: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:26:11.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ls26d" for this suite.
Feb 26 12:26:33.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:26:34.280: INFO: namespace: e2e-tests-kubectl-ls26d, resource: bindings, ignored listing per whitelist
Feb 26 12:26:34.282: INFO: namespace e2e-tests-kubectl-ls26d deletion completed in 23.244804829s

• [SLOW TEST:39.891 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:26:34.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-3a0c14f2-5893-11ea-8134-0242ac110008
STEP: Creating secret with name secret-projected-all-test-volume-3a0c14a6-5893-11ea-8134-0242ac110008
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 26 12:26:34.594: INFO: Waiting up to 5m0s for pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-wx78r" to be "success or failure"
Feb 26 12:26:34.619: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.955385ms
Feb 26 12:26:36.634: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039749947s
Feb 26 12:26:38.649: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054713015s
Feb 26 12:26:40.710: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115718869s
Feb 26 12:26:42.728: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134401618s
Feb 26 12:26:44.740: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146009103s
STEP: Saw pod success
Feb 26 12:26:44.740: INFO: Pod "projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:26:44.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008 container projected-all-volume-test: 
STEP: delete the pod
Feb 26 12:26:44.810: INFO: Waiting for pod projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008 to disappear
Feb 26 12:26:44.834: INFO: Pod projected-volume-3a0c11bd-5893-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:26:44.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wx78r" for this suite.
Feb 26 12:26:51.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:26:51.117: INFO: namespace: e2e-tests-projected-wx78r, resource: bindings, ignored listing per whitelist
Feb 26 12:26:51.295: INFO: namespace e2e-tests-projected-wx78r deletion completed in 6.454532095s

• [SLOW TEST:17.013 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
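The projected-combined spec above mounts a ConfigMap, a Secret, and downward-API fields through one projected volume and checks that all three components appear. A minimal sketch of such a pod follows; the source names, mount path, image, and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedAllPod sketches a pod whose single "projected" volume combines a
// configMap source, a secret source, and a downwardAPI source.
func projectedAllPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-all"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"}, // placeholder
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"}, // placeholder
							}},
							{DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "ls -R /projected && cat /projected/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/projected"}},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(projectedAllPod(), "", "  ")
	fmt.Println(string(out))
}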
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:26:51.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 26 12:26:52.783: INFO: Pod name wrapped-volume-race-44ded211-5893-11ea-8134-0242ac110008: Found 0 pods out of 5
Feb 26 12:26:57.817: INFO: Pod name wrapped-volume-race-44ded211-5893-11ea-8134-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-44ded211-5893-11ea-8134-0242ac110008 in namespace e2e-tests-emptydir-wrapper-4lmjw, will wait for the garbage collector to delete the pods
Feb 26 12:28:40.005: INFO: Deleting ReplicationController wrapped-volume-race-44ded211-5893-11ea-8134-0242ac110008 took: 42.992039ms
Feb 26 12:28:40.406: INFO: Terminating ReplicationController wrapped-volume-race-44ded211-5893-11ea-8134-0242ac110008 pods took: 401.140452ms
STEP: Creating RC which spawns configmap-volume pods
Feb 26 12:29:32.911: INFO: Pod name wrapped-volume-race-a44b8cb0-5893-11ea-8134-0242ac110008: Found 0 pods out of 5
Feb 26 12:29:37.943: INFO: Pod name wrapped-volume-race-a44b8cb0-5893-11ea-8134-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a44b8cb0-5893-11ea-8134-0242ac110008 in namespace e2e-tests-emptydir-wrapper-4lmjw, will wait for the garbage collector to delete the pods
Feb 26 12:31:20.095: INFO: Deleting ReplicationController wrapped-volume-race-a44b8cb0-5893-11ea-8134-0242ac110008 took: 32.292055ms
Feb 26 12:31:20.496: INFO: Terminating ReplicationController wrapped-volume-race-a44b8cb0-5893-11ea-8134-0242ac110008 pods took: 400.661766ms
STEP: Creating RC which spawns configmap-volume pods
Feb 26 12:32:03.738: INFO: Pod name wrapped-volume-race-fe2c3acb-5893-11ea-8134-0242ac110008: Found 0 pods out of 5
Feb 26 12:32:08.794: INFO: Pod name wrapped-volume-race-fe2c3acb-5893-11ea-8134-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fe2c3acb-5893-11ea-8134-0242ac110008 in namespace e2e-tests-emptydir-wrapper-4lmjw, will wait for the garbage collector to delete the pods
Feb 26 12:33:52.955: INFO: Deleting ReplicationController wrapped-volume-race-fe2c3acb-5893-11ea-8134-0242ac110008 took: 27.373004ms
Feb 26 12:33:53.256: INFO: Terminating ReplicationController wrapped-volume-race-fe2c3acb-5893-11ea-8134-0242ac110008 pods took: 301.114014ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:34:41.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4lmjw" for this suite.
Feb 26 12:34:51.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:34:51.878: INFO: namespace: e2e-tests-emptydir-wrapper-4lmjw, resource: bindings, ignored listing per whitelist
Feb 26 12:34:51.916: INFO: namespace e2e-tests-emptydir-wrapper-4lmjw deletion completed in 10.225283697s

• [SLOW TEST:480.621 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:34:51.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 26 12:34:52.257: INFO: Waiting up to 5m0s for pod "pod-62b1cd79-5894-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-sqggg" to be "success or failure"
Feb 26 12:34:52.267: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.974552ms
Feb 26 12:34:54.295: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037983309s
Feb 26 12:34:57.332: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.075205284s
Feb 26 12:34:59.351: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.094239517s
Feb 26 12:35:02.275: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017892167s
Feb 26 12:35:04.548: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.291449646s
Feb 26 12:35:06.768: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.511026477s
Feb 26 12:35:08.782: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.525306819s
Feb 26 12:35:10.881: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.62366052s
STEP: Saw pod success
Feb 26 12:35:10.881: INFO: Pod "pod-62b1cd79-5894-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:35:10.916: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-62b1cd79-5894-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 12:35:11.207: INFO: Waiting for pod pod-62b1cd79-5894-11ea-8134-0242ac110008 to disappear
Feb 26 12:35:11.228: INFO: Pod pod-62b1cd79-5894-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:35:11.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sqggg" for this suite.
Feb 26 12:35:17.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:35:17.556: INFO: namespace: e2e-tests-emptydir-sqggg, resource: bindings, ignored listing per whitelist
Feb 26 12:35:17.556: INFO: namespace e2e-tests-emptydir-sqggg deletion completed in 6.322275684s

• [SLOW TEST:25.640 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:35:17.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-71e4f878-5894-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:35:17.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-tmq9h" to be "success or failure"
Feb 26 12:35:17.817: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.739833ms
Feb 26 12:35:19.919: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112516523s
Feb 26 12:35:21.943: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136779415s
Feb 26 12:35:24.211: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404006837s
Feb 26 12:35:26.228: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.421091217s
Feb 26 12:35:28.646: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.839761111s
STEP: Saw pod success
Feb 26 12:35:28.647: INFO: Pod "pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:35:28.737: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 26 12:35:28.886: INFO: Waiting for pod pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008 to disappear
Feb 26 12:35:28.894: INFO: Pod pod-configmaps-71ec9de9-5894-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:35:28.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tmq9h" for this suite.
Feb 26 12:35:35.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:35:35.207: INFO: namespace: e2e-tests-configmap-tmq9h, resource: bindings, ignored listing per whitelist
Feb 26 12:35:35.244: INFO: namespace e2e-tests-configmap-tmq9h deletion completed in 6.190496416s

• [SLOW TEST:17.687 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
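The ConfigMap spec above both remaps a key to a different file path ("mappings") and sets an explicit per-item file mode that overrides the volume's default. The sketch below shows the volume wiring involved; the ConfigMap name, key, remapped path, 0400 mode, and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapItemModePod sketches a pod that mounts one configMap key under a
// new path and with an explicit per-item file mode.
func configMapItemModePod() *corev1.Pod {
	itemMode := int32(0400) // placeholder; the point is that it overrides the volume default
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-item-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // placeholder
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2", // remapped file name
							Mode: &itemMode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(configMapItemModePod(), "", "  ")
	fmt.Println(string(out))
}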
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:35:35.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-7c7aa20e-5894-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:35:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gqbzk" for this suite.
Feb 26 12:36:13.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:36:13.826: INFO: namespace: e2e-tests-configmap-gqbzk, resource: bindings, ignored listing per whitelist
Feb 26 12:36:13.922: INFO: namespace e2e-tests-configmap-gqbzk deletion completed in 24.24918842s

• [SLOW TEST:38.678 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
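The binary-data spec above puts both a text entry (Data) and a raw-bytes entry (BinaryData) into one ConfigMap and checks that both surface as files in the mounted volume. A hedged client-go sketch of creating such a ConfigMap follows; the kubeconfig path, namespace, name, and byte values are assumptions, and the context-taking Create signature is from recent client-go releases.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One ConfigMap carrying both UTF-8 text (Data) and arbitrary bytes
	// (BinaryData); a pod mounting it sees one file per key.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"}, // placeholder name
		Data:       map[string]string{"data": "some text"},
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe, 0x00, 0xff}},
	}
	if _, err := client.CoreV1().ConfigMaps("default").Create(
		context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}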
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:36:13.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-938a3c0e-5894-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 12:36:14.225: INFO: Waiting up to 5m0s for pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-jkwfw" to be "success or failure"
Feb 26 12:36:14.245: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.017377ms
Feb 26 12:36:16.275: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050073127s
Feb 26 12:36:18.291: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066255604s
Feb 26 12:36:20.610: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384536868s
Feb 26 12:36:22.641: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415905027s
Feb 26 12:36:24.666: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.441132206s
Feb 26 12:36:26.680: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.45451928s
STEP: Saw pod success
Feb 26 12:36:26.680: INFO: Pod "pod-secrets-938c492b-5894-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:36:26.683: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-938c492b-5894-11ea-8134-0242ac110008 container secret-env-test: 
STEP: delete the pod
Feb 26 12:36:27.112: INFO: Waiting for pod pod-secrets-938c492b-5894-11ea-8134-0242ac110008 to disappear
Feb 26 12:36:27.146: INFO: Pod pod-secrets-938c492b-5894-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:36:27.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jkwfw" for this suite.
Feb 26 12:36:33.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:36:33.422: INFO: namespace: e2e-tests-secrets-jkwfw, resource: bindings, ignored listing per whitelist
Feb 26 12:36:33.507: INFO: namespace e2e-tests-secrets-jkwfw deletion completed in 6.342784712s

• [SLOW TEST:19.584 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
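The Secrets spec above exposes a secret key to the container as an environment variable and verifies it by printing the environment. The sketch below shows a pod wired up that way via SecretKeyRef; the secret name, key, variable name, and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretEnvPod sketches a pod that pulls one key of a secret into an
// environment variable.
func secretEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // placeholder
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(secretEnvPod(), "", "  ")
	fmt.Println(string(out))
}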
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:36:33.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:36:33.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-8tfzc" to be "success or failure"
Feb 26 12:36:33.955: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 136.360866ms
Feb 26 12:36:35.972: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152663579s
Feb 26 12:36:37.978: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159477s
Feb 26 12:36:42.082: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262879369s
Feb 26 12:36:44.099: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.280283038s
Feb 26 12:36:46.114: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.294845253s
STEP: Saw pod success
Feb 26 12:36:46.114: INFO: Pod "downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:36:46.129: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:36:47.855: INFO: Waiting for pod downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008 to disappear
Feb 26 12:36:47.888: INFO: Pod downwardapi-volume-9f398ee8-5894-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:36:47.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8tfzc" for this suite.
Feb 26 12:36:54.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:36:54.173: INFO: namespace: e2e-tests-downward-api-8tfzc, resource: bindings, ignored listing per whitelist
Feb 26 12:36:54.263: INFO: namespace e2e-tests-downward-api-8tfzc deletion completed in 6.357905314s

• [SLOW TEST:20.756 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
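The downward-API test above asserts that an explicit per-item mode is applied to the file projected from pod metadata. A minimal sketch of such a pod with the k8s.io/api types; the 0400 mode, mount path, and image are assumptions for illustration rather than values from the test source.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode; an assumed value for illustration
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode, // the per-item mode this kind of test asserts on
						}},
					},
				},
			}},
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```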
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:36:54.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 26 12:36:54.630: INFO: Number of nodes with available pods: 0
Feb 26 12:36:54.630: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:36:55.664: INFO: Number of nodes with available pods: 0
Feb 26 12:36:55.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:36:56.713: INFO: Number of nodes with available pods: 0
Feb 26 12:36:56.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:36:57.655: INFO: Number of nodes with available pods: 0
Feb 26 12:36:57.655: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:36:58.687: INFO: Number of nodes with available pods: 0
Feb 26 12:36:58.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:00.251: INFO: Number of nodes with available pods: 0
Feb 26 12:37:00.251: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:00.713: INFO: Number of nodes with available pods: 0
Feb 26 12:37:00.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:01.656: INFO: Number of nodes with available pods: 0
Feb 26 12:37:01.656: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:02.663: INFO: Number of nodes with available pods: 0
Feb 26 12:37:02.663: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:03.855: INFO: Number of nodes with available pods: 0
Feb 26 12:37:03.855: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:04.699: INFO: Number of nodes with available pods: 0
Feb 26 12:37:04.699: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:05.704: INFO: Number of nodes with available pods: 1
Feb 26 12:37:05.704: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 26 12:37:05.875: INFO: Number of nodes with available pods: 0
Feb 26 12:37:05.875: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:06.904: INFO: Number of nodes with available pods: 0
Feb 26 12:37:06.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:07.902: INFO: Number of nodes with available pods: 0
Feb 26 12:37:07.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:09.035: INFO: Number of nodes with available pods: 0
Feb 26 12:37:09.035: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:09.907: INFO: Number of nodes with available pods: 0
Feb 26 12:37:09.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:10.917: INFO: Number of nodes with available pods: 0
Feb 26 12:37:10.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:11.904: INFO: Number of nodes with available pods: 0
Feb 26 12:37:11.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:12.904: INFO: Number of nodes with available pods: 0
Feb 26 12:37:12.904: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:14.011: INFO: Number of nodes with available pods: 0
Feb 26 12:37:14.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:14.910: INFO: Number of nodes with available pods: 0
Feb 26 12:37:14.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:15.895: INFO: Number of nodes with available pods: 0
Feb 26 12:37:15.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:16.918: INFO: Number of nodes with available pods: 0
Feb 26 12:37:16.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:17.899: INFO: Number of nodes with available pods: 0
Feb 26 12:37:17.899: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:20.103: INFO: Number of nodes with available pods: 0
Feb 26 12:37:20.103: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:20.902: INFO: Number of nodes with available pods: 0
Feb 26 12:37:20.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:21.905: INFO: Number of nodes with available pods: 0
Feb 26 12:37:21.905: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 12:37:22.903: INFO: Number of nodes with available pods: 1
Feb 26 12:37:22.903: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lqgr6, will wait for the garbage collector to delete the pods
Feb 26 12:37:22.999: INFO: Deleting DaemonSet.extensions daemon-set took: 30.171252ms
Feb 26 12:37:23.099: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.509036ms
Feb 26 12:37:32.682: INFO: Number of nodes with available pods: 0
Feb 26 12:37:32.682: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 12:37:32.693: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lqgr6/daemonsets","resourceVersion":"22981863"},"items":null}

Feb 26 12:37:32.695: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lqgr6/pods","resourceVersion":"22981863"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:37:32.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lqgr6" for this suite.
Feb 26 12:37:40.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:37:40.880: INFO: namespace: e2e-tests-daemonsets-lqgr6, resource: bindings, ignored listing per whitelist
Feb 26 12:37:40.914: INFO: namespace e2e-tests-daemonsets-lqgr6 deletion completed in 8.202754857s

• [SLOW TEST:46.650 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
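For reference alongside the DaemonSet test above, which creates a simple DaemonSet, deletes its pod, and waits for the controller to revive it: a minimal sketch of such a DaemonSet built with the k8s.io/api types. The label key, image, and port are illustrative assumptions.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// The controller schedules one pod per matching node; deleting that
			// pod causes the controller to recreate it, which is what the test
			// above waits for.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx", // illustrative image
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	fmt.Println("built daemonset spec:", ds.Name)
}
```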
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:37:40.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 26 12:37:51.270: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c7643a77-5894-11ea-8134-0242ac110008,GenerateName:,Namespace:e2e-tests-events-4nfxh,SelfLink:/api/v1/namespaces/e2e-tests-events-4nfxh/pods/send-events-c7643a77-5894-11ea-8134-0242ac110008,UID:c765daf2-5894-11ea-a994-fa163e34d433,ResourceVersion:22981914,Generation:0,CreationTimestamp:2020-02-26 12:37:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 186046270,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4ndld {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4ndld,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4ndld true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023f8570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023f8590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:37:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:37:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:37:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:37:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-26 12:37:41 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-26 12:37:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://6052cb00affe03e49aa00d39dc9330a069d49efcdb962febc993e3d2527a9bff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 26 12:37:53.283: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 26 12:37:55.294: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:37:55.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-4nfxh" for this suite.
Feb 26 12:38:35.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:38:35.479: INFO: namespace: e2e-tests-events-4nfxh, resource: bindings, ignored listing per whitelist
Feb 26 12:38:35.586: INFO: namespace e2e-tests-events-4nfxh deletion completed in 40.187080132s

• [SLOW TEST:54.672 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
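The events test above waits for a scheduler event and a kubelet event about the pod it created. One way such checks are commonly expressed is with a field selector over the events API scoped to the pod; the sketch below builds two such selectors with k8s.io/apimachinery/pkg/fields. The pod name and namespace are placeholders, and the exact filter used by the test is not shown in this log.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Field selector matching events the scheduler recorded for one pod;
	// the pod name and namespace are placeholders.
	schedulerEvents := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-example",
		"involvedObject.namespace": "e2e-tests-events-example",
		"source":                   "default-scheduler",
	}.AsSelector().String()

	// Same shape for events reported by the kubelet.
	kubeletEvents := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-example",
		"involvedObject.namespace": "e2e-tests-events-example",
		"source":                   "kubelet",
	}.AsSelector().String()

	// Either string can be passed to an events List call via
	// metav1.ListOptions{FieldSelector: ...}.
	fmt.Println(schedulerEvents)
	fmt.Println(kubeletEvents)
}
```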
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:38:35.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 12:38:35.870: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 26 12:38:40.916: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 26 12:38:46.936: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 26 12:38:48.955: INFO: Creating deployment "test-rollover-deployment"
Feb 26 12:38:48.997: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 26 12:38:52.185: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 26 12:38:52.512: INFO: Ensure that both replica sets have 1 created replica
Feb 26 12:38:52.739: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 26 12:38:52.872: INFO: Updating deployment test-rollover-deployment
Feb 26 12:38:52.872: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 26 12:38:55.306: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 26 12:38:55.617: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 26 12:38:55.776: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:38:55.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:38:58.069: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:38:58.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:38:59.882: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:38:59.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:03.685: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:03.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:05.034: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:05.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:05.951: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:05.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:07.898: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:07.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317547, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:09.961: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:09.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317547, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:11.833: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:11.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317547, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:13.810: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:13.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317547, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:15.808: INFO: all replica sets need to contain the pod-template-hash label
Feb 26 12:39:15.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317547, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718317529, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 12:39:17.908: INFO: 
Feb 26 12:39:17.908: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 26 12:39:17.944: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-xqr9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xqr9q/deployments/test-rollover-deployment,UID:efc9f963-5894-11ea-a994-fa163e34d433,ResourceVersion:22982097,Generation:2,CreationTimestamp:2020-02-26 12:38:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-26 12:38:49 +0000 UTC 2020-02-26 12:38:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-26 12:39:17 +0000 UTC 2020-02-26 12:38:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 26 12:39:17.953: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-xqr9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xqr9q/replicasets/test-rollover-deployment-5b8479fdb6,UID:f2199cb9-5894-11ea-a994-fa163e34d433,ResourceVersion:22982088,Generation:2,CreationTimestamp:2020-02-26 12:38:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment efc9f963-5894-11ea-a994-fa163e34d433 0xc001573d07 0xc001573d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 26 12:39:17.953: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 26 12:39:17.953: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-xqr9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xqr9q/replicasets/test-rollover-controller,UID:e7f60c79-5894-11ea-a994-fa163e34d433,ResourceVersion:22982096,Generation:2,CreationTimestamp:2020-02-26 12:38:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment efc9f963-5894-11ea-a994-fa163e34d433 0xc001573b4f 0xc001573b60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 26 12:39:17.954: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-xqr9q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xqr9q/replicasets/test-rollover-deployment-58494b7559,UID:efd4242a-5894-11ea-a994-fa163e34d433,ResourceVersion:22982049,Generation:2,CreationTimestamp:2020-02-26 12:38:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment efc9f963-5894-11ea-a994-fa163e34d433 0xc001573c37 0xc001573c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 26 12:39:18.036: INFO: Pod "test-rollover-deployment-5b8479fdb6-mmg7k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-mmg7k,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-xqr9q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xqr9q/pods/test-rollover-deployment-5b8479fdb6-mmg7k,UID:f29aa0e9-5894-11ea-a994-fa163e34d433,ResourceVersion:22982073,Generation:0,CreationTimestamp:2020-02-26 12:38:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 f2199cb9-5894-11ea-a994-fa163e34d433 0xc000d1fb37 0xc000d1fb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gsnxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gsnxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gsnxj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d1fea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d1fed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:38:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:39:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:39:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 12:38:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-26 12:38:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-26 12:39:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://279f751220d0d2f14e908c869eea466c8b6a40eebd73bd4d6694b62cdcf92f72}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:39:18.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xqr9q" for this suite.
Feb 26 12:39:28.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:39:28.147: INFO: namespace: e2e-tests-deployment-xqr9q, resource: bindings, ignored listing per whitelist
Feb 26 12:39:28.272: INFO: namespace e2e-tests-deployment-xqr9q deletion completed in 10.224182184s

• [SLOW TEST:52.685 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
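The deployment dump above shows the rollover strategy in play: RollingUpdate with MaxUnavailable 0 and MaxSurge 1, plus MinReadySeconds 10, so the old replica set is only scaled down once a new pod has been ready for ten seconds. A minimal sketch of a Deployment carrying that strategy, built with the k8s.io/api types; the names and image are placeholders.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0) // never drop below the desired count
	maxSurge := intstr.FromInt(1)       // allow one extra pod during the rollover
	labels := map[string]string{"name": "rollover-pod"}

	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10, // new pod must stay ready this long before the old RS is scaled down
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "redis", // updating this image is what triggers the rollover
					}},
				},
			},
		},
	}
	fmt.Println("built deployment spec:", d.Name)
}
```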
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:39:28.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-mq5fj
Feb 26 12:39:40.699: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-mq5fj
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 12:39:40.704: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:43:41.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mq5fj" for this suite.
Feb 26 12:43:49.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:43:49.644: INFO: namespace: e2e-tests-container-probe-mq5fj, resource: bindings, ignored listing per whitelist
Feb 26 12:43:49.688: INFO: namespace e2e-tests-container-probe-mq5fj deletion completed in 8.211879504s

• [SLOW TEST:261.415 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
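The probe test above runs a pod whose exec liveness probe (`cat /tmp/health`) keeps succeeding and verifies that restartCount stays at 0. A minimal sketch of such a pod; the probe timings, image, and container command are assumptions, and the handler is assigned through the promoted Exec field so the snippet compiles against both older k8s.io/api releases (embedded Handler) and newer ones (embedded ProbeHandler).

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Exec probe: the kubelet runs `cat /tmp/health` in the container and
	// treats a zero exit code as healthy.
	probe := corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    1,
	}
	// Assigned via the promoted field to stay compatible across API versions.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox", // illustrative image
				// Write the health file up front and keep it in place, so the
				// probe keeps succeeding and restartCount stays at 0.
				Command:       []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				LivenessProbe: &probe,
			}},
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```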
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:43:49.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 12:43:50.075: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 26 12:43:50.092: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2dl9k/daemonsets","resourceVersion":"22982476"},"items":null}

Feb 26 12:43:50.102: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2dl9k/pods","resourceVersion":"22982476"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:43:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2dl9k" for this suite.
Feb 26 12:43:56.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:43:56.197: INFO: namespace: e2e-tests-daemonsets-2dl9k, resource: bindings, ignored listing per whitelist
Feb 26 12:43:56.349: INFO: namespace e2e-tests-daemonsets-2dl9k deletion completed in 6.229657056s

S [SKIPPING] [6.660 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 26 12:43:50.075: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:43:56.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:44:09.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-pg9fc" for this suite.
Feb 26 12:44:33.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:44:34.018: INFO: namespace: e2e-tests-replication-controller-pg9fc, resource: bindings, ignored listing per whitelist
Feb 26 12:44:34.077: INFO: namespace e2e-tests-replication-controller-pg9fc deletion completed in 24.331864143s

• [SLOW TEST:37.727 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
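The adoption test above follows the Given/When/Then steps in the log: create a bare pod labeled name=pod-adoption, then create a ReplicationController whose selector matches it, and check that the controller adopts the orphan instead of spawning a second pod. A minimal sketch of those two objects with the k8s.io/api types; the image is an illustrative placeholder.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	replicas := int32(1)

	// A bare pod created first, with no owner reference.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "nginx"}}, // illustrative image
		},
	}

	// A ReplicationController whose selector matches the orphan's labels.
	// With replicas=1 the controller adopts the existing pod rather than
	// creating a second one.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Println("built:", orphan.Name, rc.Name)
}
```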
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:44:34.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 26 12:44:34.273: INFO: Waiting up to 5m0s for pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-b2vz9" to be "success or failure"
Feb 26 12:44:34.393: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 120.225553ms
Feb 26 12:44:36.409: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135875635s
Feb 26 12:44:38.423: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149811887s
Feb 26 12:44:40.455: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182184881s
Feb 26 12:44:42.943: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.670306289s
Feb 26 12:44:44.955: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.682446471s
STEP: Saw pod success
Feb 26 12:44:44.956: INFO: Pod "pod-bd9a5afa-5895-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:44:44.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bd9a5afa-5895-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 12:44:46.542: INFO: Waiting for pod pod-bd9a5afa-5895-11ea-8134-0242ac110008 to disappear
Feb 26 12:44:46.559: INFO: Pod pod-bd9a5afa-5895-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:44:46.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b2vz9" for this suite.
Feb 26 12:44:52.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:44:52.704: INFO: namespace: e2e-tests-emptydir-b2vz9, resource: bindings, ignored listing per whitelist
Feb 26 12:44:52.834: INFO: namespace e2e-tests-emptydir-b2vz9 deletion completed in 6.258796902s

• [SLOW TEST:18.756 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
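The emptyDir test above mounts a volume on the default medium (node disk, as opposed to memory-backed tmpfs) and checks the mode of the mount point. A minimal sketch of such a pod; the mount path, image, and the stat command used to inspect the mode are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-medium"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Print the permissions of the mount point; leaving Medium empty
				// selects the node-disk default, as opposed to
				// corev1.StorageMediumMemory for a tmpfs-backed volume.
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```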
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:44:52.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:44:53.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-wqpwf" to be "success or failure"
Feb 26 12:44:53.297: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 31.695645ms
Feb 26 12:44:55.313: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047689329s
Feb 26 12:44:57.330: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064519542s
Feb 26 12:44:59.349: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084007212s
Feb 26 12:45:01.364: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098948982s
Feb 26 12:45:03.374: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108672263s
STEP: Saw pod success
Feb 26 12:45:03.374: INFO: Pod "downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:45:03.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:45:05.493: INFO: Waiting for pod downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008 to disappear
Feb 26 12:45:05.505: INFO: Pod downwardapi-volume-c8deca29-5895-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:45:05.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wqpwf" for this suite.
Feb 26 12:45:11.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:45:11.608: INFO: namespace: e2e-tests-projected-wqpwf, resource: bindings, ignored listing per whitelist
Feb 26 12:45:11.711: INFO: namespace e2e-tests-projected-wqpwf deletion completed in 6.197816334s

• [SLOW TEST:18.877 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
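
The projected downwardAPI spec above writes the pod name into a file with an explicit per-item mode and then checks that mode from inside the container. A small sketch of such a projection; the path and the 0400 mode are illustrative choices, not the test's exact values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedDownwardAPI builds a projected volume whose single item exposes the
// pod name with an explicit file mode, which is the behaviour asserted above.
func projectedDownwardAPI() corev1.Volume {
	mode := int32(0400) // per-item mode; the test reads this back from the mounted file
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(projectedDownwardAPI(), "", "  ")
	fmt.Println(string(out))
}
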
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:45:11.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 26 12:45:22.536: INFO: Successfully updated pod "annotationupdated409a780-5895-11ea-8134-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:45:24.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hfd6r" for this suite.
Feb 26 12:45:48.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:45:48.993: INFO: namespace: e2e-tests-projected-hfd6r, resource: bindings, ignored listing per whitelist
Feb 26 12:45:49.079: INFO: namespace e2e-tests-projected-hfd6r deletion completed in 24.269844123s

• [SLOW TEST:37.367 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
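
The spec above patches the pod's annotations ("Successfully updated pod ...") and waits for the change to appear in the projected file. A sketch of the projection that makes this possible; the kubelet rewrites the file on its sync loop after the annotations change. Volume name and path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// annotationsVolume projects the pod's annotations into a file. When the pod's
// annotations are patched, the kubelet refreshes the file contents without
// restarting the container, so readers inside the pod observe the new values.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.annotations",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(annotationsVolume(), "", "  ")
	fmt.Println(string(out))
}
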
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:45:49.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 26 12:45:49.293: INFO: Waiting up to 5m0s for pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-9p9qk" to be "success or failure"
Feb 26 12:45:49.357: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 63.961846ms
Feb 26 12:45:51.374: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080846911s
Feb 26 12:45:53.416: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122822216s
Feb 26 12:45:56.132: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.839325836s
Feb 26 12:45:58.159: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.866446271s
Feb 26 12:46:00.190: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.896902849s
STEP: Saw pod success
Feb 26 12:46:00.190: INFO: Pod "downward-api-ea5119d4-5895-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:46:00.200: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ea5119d4-5895-11ea-8134-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 26 12:46:00.376: INFO: Waiting for pod downward-api-ea5119d4-5895-11ea-8134-0242ac110008 to disappear
Feb 26 12:46:00.438: INFO: Pod downward-api-ea5119d4-5895-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:46:00.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9p9qk" for this suite.
Feb 26 12:46:06.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:46:06.848: INFO: namespace: e2e-tests-downward-api-9p9qk, resource: bindings, ignored listing per whitelist
Feb 26 12:46:06.886: INFO: namespace e2e-tests-downward-api-9p9qk deletion completed in 6.439300597s

• [SLOW TEST:17.808 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
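
The spec above injects pod name, namespace and IP into the container environment via the downward API. A sketch of that env-var wiring; the variable names are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardAPIEnv exposes pod name, namespace and pod IP as environment
// variables via fieldRef, which is what the spec above verifies from inside
// the container.
func downwardAPIEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: path},
			},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}

func main() {
	out, _ := json.MarshalIndent(downwardAPIEnv(), "", "  ")
	fmt.Println(string(out))
}
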
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:46:06.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f5092bb0-5895-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:46:07.304: INFO: Waiting up to 5m0s for pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008" in namespace "e2e-tests-configmap-qt79g" to be "success or failure"
Feb 26 12:46:07.309: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.176437ms
Feb 26 12:46:09.320: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016055914s
Feb 26 12:46:11.329: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025303117s
Feb 26 12:46:13.375: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071638049s
Feb 26 12:46:15.384: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080260264s
Feb 26 12:46:17.397: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093410653s
STEP: Saw pod success
Feb 26 12:46:17.397: INFO: Pod "pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:46:17.402: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 26 12:46:17.665: INFO: Waiting for pod pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008 to disappear
Feb 26 12:46:17.701: INFO: Pod pod-configmaps-f50aab5f-5895-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:46:17.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qt79g" for this suite.
Feb 26 12:46:23.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:46:24.058: INFO: namespace: e2e-tests-configmap-qt79g, resource: bindings, ignored listing per whitelist
Feb 26 12:46:24.104: INFO: namespace e2e-tests-configmap-qt79g deletion completed in 6.304357423s

• [SLOW TEST:17.217 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
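
The spec above creates a ConfigMap, maps one of its keys to a custom path inside the volume, and reads it back while running as a non-root user. A sketch of the two pieces involved; the ConfigMap name, key, path and UID are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// configMapVolume maps a single ConfigMap key to a custom path inside the
// mount, which is the "with mappings" part of the spec above.
func configMapVolume() corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
			},
		},
	}
}

// nonRootContext is the "as non-root" part: the pod runs with a non-zero UID,
// and the mapped file must still be readable by that user.
func nonRootContext() *corev1.PodSecurityContext {
	uid := int64(1000)
	return &corev1.PodSecurityContext{RunAsUser: &uid}
}

func main() {
	out, _ := json.MarshalIndent(configMapVolume(), "", "  ")
	fmt.Println(string(out))
	out, _ = json.MarshalIndent(nonRootContext(), "", "  ")
	fmt.Println(string(out))
}
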
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:46:24.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:46:36.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-lh24g" for this suite.
Feb 26 12:46:43.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:46:43.755: INFO: namespace: e2e-tests-kubelet-test-lh24g, resource: bindings, ignored listing per whitelist
Feb 26 12:46:43.851: INFO: namespace e2e-tests-kubelet-test-lh24g deletion completed in 7.180502599s

• [SLOW TEST:19.747 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
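
The spec above schedules a command that always fails and waits for the container status to report a terminated state with a reason. A sketch of a pod that ends up in that state (its status then carries State.Terminated with a non-empty Reason, typically "Error"); the image, command and restart policy here are illustrative rather than the e2e test's exact choices:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// alwaysFailingPod runs a command that exits non-zero and never restarts, so
// Status.ContainerStatuses[0].State.Terminated ends up populated with a reason.
func alwaysFailingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(alwaysFailingPod(), "", "  ")
	fmt.Println(string(out))
}
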
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:46:43.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 12:47:10.400: INFO: Container started at 2020-02-26 12:46:51 +0000 UTC, pod became ready at 2020-02-26 12:47:09 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:47:10.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-phprr" for this suite.
Feb 26 12:47:34.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:47:34.621: INFO: namespace: e2e-tests-container-probe-phprr, resource: bindings, ignored listing per whitelist
Feb 26 12:47:34.641: INFO: namespace e2e-tests-container-probe-phprr deletion completed in 24.232124296s

• [SLOW TEST:50.788 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
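
The spec above verifies that a pod does not become Ready before the readiness probe's initial delay has elapsed, and that failing readiness never restarts the container. A sketch of such a probe; the command and timings are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// readinessProbe returns an exec probe that only starts reporting after an
// initial delay. Until then the pod stays NotReady; unlike a liveness probe,
// a failing readiness probe never restarts the container.
func readinessProbe() *corev1.Probe {
	p := &corev1.Probe{
		InitialDelaySeconds: 30, // the pod must not be Ready before this elapses
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// Exec is a promoted field of the probe's handler struct, so this
	// assignment compiles against both older and newer k8s.io/api releases.
	p.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}
	return p
}

func main() {
	out, _ := json.MarshalIndent(readinessProbe(), "", "  ")
	fmt.Println(string(out))
}
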
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:47:34.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 26 12:47:45.240: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-296191c1-5896-11ea-8134-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-lgjgj", SelfLink:"/api/v1/namespaces/e2e-tests-pods-lgjgj/pods/pod-submit-remove-296191c1-5896-11ea-8134-0242ac110008", UID:"296400d7-5896-11ea-a994-fa163e34d433", ResourceVersion:"22982970", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718318055, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"82061224"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5vcbr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015dc140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5vcbr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a1f0c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c6f0e0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a1f100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a1f120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000a1f128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a1f12c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318055, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318063, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318063, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318055, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000b66be0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000b66d60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://e51c80b539e039789a8c2de80fe9494b01cbf691a39eb1cdee0db7b92b1d857c"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:48:02.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lgjgj" for this suite.
Feb 26 12:48:08.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:48:08.814: INFO: namespace: e2e-tests-pods-lgjgj, resource: bindings, ignored listing per whitelist
Feb 26 12:48:08.884: INFO: namespace e2e-tests-pods-lgjgj deletion completed in 6.227466804s

• [SLOW TEST:34.241 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
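
The spec above submits a pod, watches for its creation, then deletes it gracefully and waits for the kubelet to observe the termination notice. Two small pieces that correspond to that flow, mirroring the nginx:1.14-alpine pod printed in the dump; the grace period value is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// submittedPod mirrors the shape of the pod printed in the dump above: a
// single nginx:1.14-alpine container selected by a "name" label.
func submittedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-submit-remove",
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "nginx",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
}

// gracefulDelete is the options object a client would pass when deleting the
// pod; a non-zero grace period is what lets the kubelet observe the
// termination notice before the object disappears, as the STEPs above show.
func gracefulDelete() *metav1.DeleteOptions {
	grace := int64(30)
	return &metav1.DeleteOptions{GracePeriodSeconds: &grace}
}

func main() {
	out, _ := json.MarshalIndent(submittedPod(), "", "  ")
	fmt.Println(string(out))
	out, _ = json.MarshalIndent(gracefulDelete(), "", "  ")
	fmt.Println(string(out))
}
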
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:48:08.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:48:09.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-xwzxz" to be "success or failure"
Feb 26 12:48:09.124: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510893ms
Feb 26 12:48:11.517: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400942991s
Feb 26 12:48:13.530: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414528423s
Feb 26 12:48:15.545: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429186269s
Feb 26 12:48:17.557: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.441595797s
Feb 26 12:48:19.654: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.538243104s
STEP: Saw pod success
Feb 26 12:48:19.654: INFO: Pod "downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:48:19.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:48:19.833: INFO: Waiting for pod downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008 to disappear
Feb 26 12:48:20.059: INFO: Pod downwardapi-volume-3da79ef3-5896-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:48:20.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xwzxz" for this suite.
Feb 26 12:48:26.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:48:26.353: INFO: namespace: e2e-tests-projected-xwzxz, resource: bindings, ignored listing per whitelist
Feb 26 12:48:26.412: INFO: namespace e2e-tests-projected-xwzxz deletion completed in 6.339112745s

• [SLOW TEST:17.528 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
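
The spec above asks the downward API for limits.cpu on a container that declares no CPU limit, and expects the node's allocatable CPU to be reported instead. A sketch of the resourceFieldRef item involved; the container name and file path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// cpuLimitFile exposes the container's effective CPU limit through the
// downward API. When the container declares no CPU limit, the value written
// to the file falls back to the node's allocatable CPU.
func cpuLimitFile() corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.cpu",
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(cpuLimitFile(), "", "  ")
	fmt.Println(string(out))
}
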
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:48:26.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb 26 12:48:26.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 26 12:48:26.848: INFO: stderr: ""
Feb 26 12:48:26.848: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:48:26.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-528rx" for this suite.
Feb 26 12:48:32.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:48:33.018: INFO: namespace: e2e-tests-kubectl-528rx, resource: bindings, ignored listing per whitelist
Feb 26 12:48:33.025: INFO: namespace e2e-tests-kubectl-528rx deletion completed in 6.156963846s

• [SLOW TEST:6.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
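
The spec above shells out to kubectl api-versions and checks that "v1" appears in the output. The same information is available programmatically through client-go's discovery client; a sketch, assuming the kubeconfig path used throughout this log:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// main lists every group/version the API server advertises, the same
// information `kubectl api-versions` printed above; the conformance check is
// simply that "v1" is present in this list.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // the core group prints as "v1"
		}
	}
}
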
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:48:33.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 26 12:48:33.315: INFO: Waiting up to 5m0s for pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-dcq9s" to be "success or failure"
Feb 26 12:48:33.329: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.755849ms
Feb 26 12:48:35.348: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032770241s
Feb 26 12:48:37.359: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043940543s
Feb 26 12:48:40.587: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.271496464s
Feb 26 12:48:42.618: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.303175003s
Feb 26 12:48:44.626: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.311341543s
STEP: Saw pod success
Feb 26 12:48:44.626: INFO: Pod "pod-4bf89dcb-5896-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:48:44.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4bf89dcb-5896-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 12:48:44.728: INFO: Waiting for pod pod-4bf89dcb-5896-11ea-8134-0242ac110008 to disappear
Feb 26 12:48:44.844: INFO: Pod pod-4bf89dcb-5896-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:48:44.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dcq9s" for this suite.
Feb 26 12:48:52.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:48:53.257: INFO: namespace: e2e-tests-emptydir-dcq9s, resource: bindings, ignored listing per whitelist
Feb 26 12:48:53.301: INFO: namespace e2e-tests-emptydir-dcq9s deletion completed in 8.440990986s

• [SLOW TEST:20.277 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
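
The spec above exercises the (non-root,0777,default) emptyDir case: a non-root UID writing a world-writable file into an emptyDir on the default medium. A sketch of such a pod spec; the image, UID and shell command are illustrative, not the e2e test's own mounttest invocation:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nonRootEmptyDirPodSpec runs as a non-root user, mounts an emptyDir on the
// default medium, writes a file, sets it to 0777 and prints the permissions.
func nonRootEmptyDirPodSpec() corev1.PodSpec {
	uid := int64(1000)
	return corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []corev1.Volume{{
			Name:         "test-volume",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume/file"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
}

func main() {
	out, _ := json.MarshalIndent(nonRootEmptyDirPodSpec(), "", "  ")
	fmt.Println(string(out))
}
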
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:48:53.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 26 12:48:53.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:48:55.427: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 12:48:55.427: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 26 12:48:55.465: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 26 12:48:55.523: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 26 12:48:55.575: INFO: scanned /root for discovery docs: 
Feb 26 12:48:55.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:49:22.013: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 26 12:49:22.014: INFO: stdout: "Created e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145\nScaling up e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 26 12:49:22.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:49:22.298: INFO: stderr: ""
Feb 26 12:49:22.298: INFO: stdout: "e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145-p8lk4 e2e-test-nginx-rc-g8rp7 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 26 12:49:27.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:49:27.474: INFO: stderr: ""
Feb 26 12:49:27.474: INFO: stdout: "e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145-p8lk4 "
Feb 26 12:49:27.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145-p8lk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:49:27.608: INFO: stderr: ""
Feb 26 12:49:27.608: INFO: stdout: "true"
Feb 26 12:49:27.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145-p8lk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:49:27.729: INFO: stderr: ""
Feb 26 12:49:27.729: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 26 12:49:27.729: INFO: e2e-test-nginx-rc-733d73b3256e94dd14f194501abca145-p8lk4 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 26 12:49:27.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kq9pf'
Feb 26 12:49:27.982: INFO: stderr: ""
Feb 26 12:49:27.983: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:49:27.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kq9pf" for this suite.
Feb 26 12:49:52.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:49:52.215: INFO: namespace: e2e-tests-kubectl-kq9pf, resource: bindings, ignored listing per whitelist
Feb 26 12:49:52.231: INFO: namespace e2e-tests-kubectl-kq9pf deletion completed in 24.236131745s

• [SLOW TEST:58.928 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
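
Both kubectl warnings above (--generator=run/v1 and rolling-update being deprecated) point at the replacement workflow: a Deployment whose rollout is driven by kubectl set image / kubectl rollout status. The RC-based flow is what this v1.13 conformance spec requires; the Deployment below is only a sketch of the modern equivalent, with illustrative names and labels:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxDeployment is the Deployment-based replacement for the deprecated
// `kubectl run --generator=run/v1` + `kubectl rolling-update` flow seen above.
// A rollout to the same or a new image would be driven with
// `kubectl set image deployment/e2e-test-nginx nginx=...` followed by
// `kubectl rollout status deployment/e2e-test-nginx`.
func nginxDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(nginxDeployment(), "", "  ")
	fmt.Println(string(out))
}
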
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:49:52.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 12:49:52.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-9lx5t" to be "success or failure"
Feb 26 12:49:52.491: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.688536ms
Feb 26 12:49:54.526: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05625535s
Feb 26 12:49:56.548: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078561617s
Feb 26 12:49:58.966: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496552368s
Feb 26 12:50:00.990: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519958046s
Feb 26 12:50:03.022: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.55222998s
STEP: Saw pod success
Feb 26 12:50:03.022: INFO: Pod "downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:50:03.031: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 12:50:03.406: INFO: Waiting for pod downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008 to disappear
Feb 26 12:50:03.417: INFO: Pod downwardapi-volume-7b3ae38d-5896-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:50:03.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9lx5t" for this suite.
Feb 26 12:50:09.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:50:09.740: INFO: namespace: e2e-tests-projected-9lx5t, resource: bindings, ignored listing per whitelist
Feb 26 12:50:09.814: INFO: namespace e2e-tests-projected-9lx5t deletion completed in 6.384910628s

• [SLOW TEST:17.583 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:50:09.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-85c0f1ad-5896-11ea-8134-0242ac110008
STEP: Creating secret with name s-test-opt-upd-85c0f291-5896-11ea-8134-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-85c0f1ad-5896-11ea-8134-0242ac110008
STEP: Updating secret s-test-opt-upd-85c0f291-5896-11ea-8134-0242ac110008
STEP: Creating secret with name s-test-opt-create-85c0f34d-5896-11ea-8134-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:50:29.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fsshw" for this suite.
Feb 26 12:50:55.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:50:55.381: INFO: namespace: e2e-tests-secrets-fsshw, resource: bindings, ignored listing per whitelist
Feb 26 12:50:55.406: INFO: namespace e2e-tests-secrets-fsshw deletion completed in 26.271301329s

• [SLOW TEST:45.592 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
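
The spec above mounts secrets marked optional, deletes one, updates another and creates a third, then waits for the mounted files to catch up without restarting the pod. A sketch of an optional secret volume, the piece that allows a referenced secret to be absent or to appear later; the secret name is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// optionalSecretVolume mounts a secret that is allowed not to exist. The
// kubelet keeps the mounted files in sync as the secret is created, updated
// or deleted, which is the update behaviour the spec above waits for.
func optionalSecretVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt",
				Optional:   &optional,
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(optionalSecretVolume(), "", "  ")
	fmt.Println(string(out))
}
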
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:50:55.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fx7nt
Feb 26 12:51:07.669: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fx7nt
STEP: checking the pod's current state and verifying that restartCount is present
Feb 26 12:51:07.674: INFO: Initial restart count of pod liveness-exec is 0
Feb 26 12:52:02.664: INFO: Restart count of pod e2e-tests-container-probe-fx7nt/liveness-exec is now 1 (54.989923987s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:52:02.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fx7nt" for this suite.
Feb 26 12:52:10.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:52:11.033: INFO: namespace: e2e-tests-container-probe-fx7nt, resource: bindings, ignored listing per whitelist
Feb 26 12:52:11.135: INFO: namespace e2e-tests-container-probe-fx7nt deletion completed in 8.351165592s

• [SLOW TEST:75.727 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
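
The spec above uses the classic liveness-exec pattern: the container creates /tmp/health, later removes it, the cat /tmp/health probe starts failing, and the kubelet restarts the container (restartCount goes from 0 to 1 in the log). A sketch of such a container; the image and timings are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// livenessExecContainer removes its own health file partway through, so the
// exec liveness probe eventually fails and the kubelet restarts the container.
func livenessExecContainer() corev1.Container {
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       5,
		FailureThreshold:    1,
	}
	// Exec is a promoted field of the probe's handler struct, so this compiles
	// against both older and newer k8s.io/api releases.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}
	return corev1.Container{
		Name:          "liveness-exec",
		Image:         "busybox",
		Command:       []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: probe,
	}
}

func main() {
	out, _ := json.MarshalIndent(livenessExecContainer(), "", "  ")
	fmt.Println(string(out))
}
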
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:52:11.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 26 12:52:11.431: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 26 12:52:16.456: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:52:18.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-cfs49" for this suite.
Feb 26 12:52:31.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:52:31.981: INFO: namespace: e2e-tests-replication-controller-cfs49, resource: bindings, ignored listing per whitelist
Feb 26 12:52:32.041: INFO: namespace e2e-tests-replication-controller-cfs49 deletion completed in 13.703897798s

• [SLOW TEST:20.905 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
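
The spec above relabels one of the ReplicationController's pods so it no longer matches the selector; the controller then releases that pod (drops its ownerReference) and creates a replacement to get back to the desired replica count. A sketch of such a controller; names and labels are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podReleaseRC selects pods by a plain label match. Changing that label on a
// running pod takes it out of the selector, so the controller stops owning it
// and spawns a new pod, which is the behaviour the STEPs above verify.
func podReleaseRC() *corev1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-release"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(podReleaseRC(), "", "  ")
	fmt.Println(string(out))
}
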
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:52:32.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 26 12:52:32.225: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pmb7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-pmb7p/configmaps/e2e-watch-test-watch-closed,UID:da76da42-5896-11ea-a994-fa163e34d433,ResourceVersion:22983597,Generation:0,CreationTimestamp:2020-02-26 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 26 12:52:32.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pmb7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-pmb7p/configmaps/e2e-watch-test-watch-closed,UID:da76da42-5896-11ea-a994-fa163e34d433,ResourceVersion:22983598,Generation:0,CreationTimestamp:2020-02-26 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 26 12:52:32.252: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pmb7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-pmb7p/configmaps/e2e-watch-test-watch-closed,UID:da76da42-5896-11ea-a994-fa163e34d433,ResourceVersion:22983599,Generation:0,CreationTimestamp:2020-02-26 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 26 12:52:32.252: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pmb7p,SelfLink:/api/v1/namespaces/e2e-tests-watch-pmb7p/configmaps/e2e-watch-test-watch-closed,UID:da76da42-5896-11ea-a994-fa163e34d433,ResourceVersion:22983600,Generation:0,CreationTimestamp:2020-02-26 12:52:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:52:32.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pmb7p" for this suite.
Feb 26 12:52:38.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:52:38.480: INFO: namespace: e2e-tests-watch-pmb7p, resource: bindings, ignored listing per whitelist
Feb 26 12:52:38.500: INFO: namespace e2e-tests-watch-pmb7p deletion completed in 6.222362834s

• [SLOW TEST:6.459 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
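
The spec above takes two notifications from a watch, closes it, mutates the ConfigMap while no watch is open, and then restarts a watch from the last observed resourceVersion so the missed events are replayed. A sketch of that pattern, written against a client-go release contemporary with the v1.13 cluster in this log (Watch takes only ListOptions there; newer releases add a context.Context first). The namespace and label selector are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	cms := client.CoreV1().ConfigMaps("default")
	opts := metav1.ListOptions{LabelSelector: "watch-this-configmap=watch-closed-and-restarted"}

	// First watch: consume two notifications and remember the last resourceVersion.
	w, err := cms.Watch(opts)
	if err != nil {
		panic(err)
	}
	lastRV := ""
	for i := 0; i < 2; i++ {
		ev := <-w.ResultChan()
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			lastRV = cm.ResourceVersion
		}
		fmt.Println("got:", ev.Type)
	}
	w.Stop()

	// ...changes to the ConfigMap happen here, while no watch is open...

	// Second watch: start from the last observed resourceVersion so the missed
	// MODIFIED/DELETED events are replayed instead of lost.
	opts.ResourceVersion = lastRV
	w2, err := cms.Watch(opts)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 2; i++ {
		ev := <-w2.ResultChan()
		fmt.Println("replayed:", ev.Type)
	}
	w2.Stop()
}
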
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:52:38.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 26 12:53:00.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:00.935: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:02.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:02.950: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:04.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:04.955: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:06.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:06.952: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:08.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:08.946: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:10.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:10.980: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:12.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:12.950: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:14.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:14.964: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:16.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:16.958: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:18.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:19.644: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:20.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:20.947: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:22.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:22.959: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:24.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:24.969: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:26.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:26.959: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:28.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:28.950: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:30.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:30.965: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 26 12:53:32.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 26 12:53:32.953: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:53:32.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jlp7b" for this suite.
Feb 26 12:53:59.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:53:59.257: INFO: namespace: e2e-tests-container-lifecycle-hook-jlp7b, resource: bindings, ignored listing per whitelist
Feb 26 12:53:59.339: INFO: namespace e2e-tests-container-lifecycle-hook-jlp7b deletion completed in 26.346499496s

• [SLOW TEST:80.838 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
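Annotation: the lifecycle-hook spec first starts a handler pod (the "container to handle the HTTPGet hook request" step), then creates a pod whose container carries a preStop exec hook, deletes it, and polls — the long run of "still exists" lines above — until the kubelet has run the hook and the pod is gone, finally checking that the handler received the call. A minimal sketch of a pod spec carrying such a hook, as a Go object literal (invented names, not the real fixture; corev1.Handler is the hook type in the v1.13 API, renamed LifecycleHandler in much later releases):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // preStopPod builds a pod whose only container runs hookCmd just before it
    // is terminated; in the e2e test the command notifies the handler pod.
    func preStopPod(hookCmd []string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-exec-hook",
                    Image: "k8s.gcr.io/pause:3.1",
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{Command: hookCmd},
                        },
                    },
                }},
            },
        }
    }

    func main() {
        // Hypothetical hook command; the real fixture targets its handler pod's address.
        _ = preStopPod([]string{"sh", "-c", "wget -qO- http://handler:8080/echo?msg=prestop"})
    }
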
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:53:59.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0ea62f96-5897-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 12:53:59.768: INFO: Waiting up to 5m0s for pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-9zxz7" to be "success or failure"
Feb 26 12:53:59.837: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 68.833746ms
Feb 26 12:54:02.031: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262502815s
Feb 26 12:54:04.057: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288352697s
Feb 26 12:54:06.472: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.70309152s
Feb 26 12:54:08.492: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723568916s
Feb 26 12:54:10.553: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.784052818s
Feb 26 12:54:12.614: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.845191402s
STEP: Saw pod success
Feb 26 12:54:12.614: INFO: Pod "pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:54:12.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 26 12:54:12.802: INFO: Waiting for pod pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008 to disappear
Feb 26 12:54:12.811: INFO: Pod pod-secrets-0ea83cd1-5897-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:54:12.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9zxz7" for this suite.
Feb 26 12:54:20.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:54:21.432: INFO: namespace: e2e-tests-secrets-9zxz7, resource: bindings, ignored listing per whitelist
Feb 26 12:54:21.646: INFO: namespace e2e-tests-secrets-9zxz7 deletion completed in 8.827184179s

• [SLOW TEST:22.306 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
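Annotation: this spec mounts one Secret into the same pod twice, through two separate volumes, and has the test container read both mounts to confirm they expose the same data. A sketch of that pod shape as a Go object literal (volume names, mount paths, and key names invented for illustration):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // twoVolumeSecretPod mounts the same Secret through two volumes and reads
    // the same key from both mount points.
    func twoVolumeSecretPod(secretName string) *corev1.Pod {
        secretVol := func(volName string) corev1.Volume {
            return corev1.Volume{
                Name: volName,
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: secretName},
                },
            }
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-multi"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                    },
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
    }

    func main() {
        _ = twoVolumeSecretPod("secret-test")
    }
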
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:54:21.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 26 12:54:22.933: INFO: Waiting up to 5m0s for pod "pod-1c7215ce-5897-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-5d2g2" to be "success or failure"
Feb 26 12:54:23.055: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 122.142178ms
Feb 26 12:54:25.067: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133825163s
Feb 26 12:54:27.098: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165136305s
Feb 26 12:54:29.214: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281381618s
Feb 26 12:54:31.232: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.298910724s
Feb 26 12:54:33.292: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.358786282s
Feb 26 12:54:35.301: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.368158339s
Feb 26 12:54:37.316: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.382858863s
Feb 26 12:54:39.329: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.395646881s
STEP: Saw pod success
Feb 26 12:54:39.329: INFO: Pod "pod-1c7215ce-5897-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:54:39.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1c7215ce-5897-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 12:54:39.558: INFO: Waiting for pod pod-1c7215ce-5897-11ea-8134-0242ac110008 to disappear
Feb 26 12:54:39.575: INFO: Pod pod-1c7215ce-5897-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:54:39.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5d2g2" for this suite.
Feb 26 12:54:50.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:54:50.826: INFO: namespace: e2e-tests-emptydir-5d2g2, resource: bindings, ignored listing per whitelist
Feb 26 12:54:50.984: INFO: namespace e2e-tests-emptydir-5d2g2 deletion completed in 11.398630322s

• [SLOW TEST:29.337 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
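Annotation: the emptyDir volume in this spec is backed by tmpfs (medium "Memory"), and the test container reports the mode of the mount point so the suite can assert it from the container log. A sketch of that volume wiring (illustrative only; the probe command here is an assumption, not the real fixture's):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsEmptyDirPod mounts a memory-backed emptyDir and prints the mode of
    // the mount point so it can be checked from the pod's log.
    func tmpfsEmptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "stat -c '%a' /test-volume && mount | grep /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
    }

    func main() {
        _ = tmpfsEmptyDirPod()
    }
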
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:54:50.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 26 12:54:51.382: INFO: namespace e2e-tests-kubectl-kp5hv
Feb 26 12:54:51.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kp5hv'
Feb 26 12:54:51.832: INFO: stderr: ""
Feb 26 12:54:51.832: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 26 12:54:52.959: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:54:52.960: INFO: Found 0 / 1
Feb 26 12:54:55.840: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:54:55.841: INFO: Found 0 / 1
Feb 26 12:54:56.877: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:54:56.878: INFO: Found 0 / 1
Feb 26 12:54:57.854: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:54:57.854: INFO: Found 0 / 1
Feb 26 12:54:58.845: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:54:58.845: INFO: Found 0 / 1
Feb 26 12:55:00.543: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:00.543: INFO: Found 0 / 1
Feb 26 12:55:00.860: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:00.861: INFO: Found 0 / 1
Feb 26 12:55:04.844: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:04.845: INFO: Found 0 / 1
Feb 26 12:55:06.685: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:06.685: INFO: Found 0 / 1
Feb 26 12:55:06.923: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:06.923: INFO: Found 0 / 1
Feb 26 12:55:07.867: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:07.868: INFO: Found 0 / 1
Feb 26 12:55:08.933: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:08.933: INFO: Found 0 / 1
Feb 26 12:55:09.868: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:09.869: INFO: Found 1 / 1
Feb 26 12:55:09.869: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 26 12:55:09.887: INFO: Selector matched 1 pods for map[app:redis]
Feb 26 12:55:09.887: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 26 12:55:09.887: INFO: wait on redis-master startup in e2e-tests-kubectl-kp5hv 
Feb 26 12:55:09.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hrcd2 redis-master --namespace=e2e-tests-kubectl-kp5hv'
Feb 26 12:55:10.057: INFO: stderr: ""
Feb 26 12:55:10.057: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Feb 12:55:08.641 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Feb 12:55:08.642 # Server started, Redis version 3.2.12\n1:M 26 Feb 12:55:08.643 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Feb 12:55:08.643 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 26 12:55:10.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-kp5hv'
Feb 26 12:55:10.350: INFO: stderr: ""
Feb 26 12:55:10.350: INFO: stdout: "service/rm2 exposed\n"
Feb 26 12:55:10.373: INFO: Service rm2 in namespace e2e-tests-kubectl-kp5hv found.
STEP: exposing service
Feb 26 12:55:12.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-kp5hv'
Feb 26 12:55:12.925: INFO: stderr: ""
Feb 26 12:55:12.926: INFO: stdout: "service/rm3 exposed\n"
Feb 26 12:55:12.954: INFO: Service rm3 in namespace e2e-tests-kubectl-kp5hv found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:55:14.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kp5hv" for this suite.
Feb 26 12:55:33.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:55:33.134: INFO: namespace: e2e-tests-kubectl-kp5hv, resource: bindings, ignored listing per whitelist
Feb 26 12:55:33.158: INFO: namespace e2e-tests-kubectl-kp5hv deletion completed in 18.168673435s

• [SLOW TEST:42.173 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
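Annotation: the "kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379" call run above simply creates a Service whose selector matches the replication controller's pod labels, forwarding port 1234 to the pods' port 6379 (rm3 then does the same against the rm2 service). Roughly the equivalent object, sketched in Go — ports copied from the command, while the selector is an assumption based on the app=redis label the test polls on:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // exposeService approximates what `kubectl expose rc redis-master --name=rm2
    // --port=1234 --target-port=6379` generates.
    func exposeService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "redis"}, // assumed from the label the test waits on
                Ports: []corev1.ServicePort{{
                    Port:       1234,
                    TargetPort: intstr.FromInt(6379),
                }},
            },
        }
    }

    func main() {
        _ = exposeService()
    }
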
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:55:33.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 26 12:55:33.346: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:56:02.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-wgz6d" for this suite.
Feb 26 12:56:26.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:56:26.175: INFO: namespace: e2e-tests-init-container-wgz6d, resource: bindings, ignored listing per whitelist
Feb 26 12:56:26.230: INFO: namespace e2e-tests-init-container-wgz6d deletion completed in 24.186093358s

• [SLOW TEST:53.071 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
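Annotation: the RestartAlways pod in this spec declares init containers that must each run to completion, in order, before the app container is started; the suite watches the pod until its Initialized condition turns true. A sketch of that shape (images mirror the struct dumps these init-container tests log below; names and commands are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // initContainersPod runs two init containers to completion before the
    // long-running app container starts; RestartPolicy is Always.
    func initContainersPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
    }

    func main() {
        _ = initContainersPod()
    }
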
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:56:26.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 26 12:56:26.466: INFO: Waiting up to 5m0s for pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-cc7pw" to be "success or failure"
Feb 26 12:56:26.481: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043804ms
Feb 26 12:56:28.748: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281262935s
Feb 26 12:56:30.787: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320533803s
Feb 26 12:56:33.093: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626439738s
Feb 26 12:56:35.148: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.681566644s
Feb 26 12:56:37.188: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.72184282s
STEP: Saw pod success
Feb 26 12:56:37.189: INFO: Pod "downward-api-6618a73d-5897-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:56:37.256: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6618a73d-5897-11ea-8134-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 26 12:56:37.459: INFO: Waiting for pod downward-api-6618a73d-5897-11ea-8134-0242ac110008 to disappear
Feb 26 12:56:37.478: INFO: Pod downward-api-6618a73d-5897-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:56:37.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cc7pw" for this suite.
Feb 26 12:56:43.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:56:44.084: INFO: namespace: e2e-tests-downward-api-cc7pw, resource: bindings, ignored listing per whitelist
Feb 26 12:56:44.133: INFO: namespace e2e-tests-downward-api-cc7pw deletion completed in 6.635287633s

• [SLOW TEST:17.903 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
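Annotation: the downward API pod here gets its own UID injected as an environment variable via a fieldRef; the test container echoes it and the suite checks the container log. A minimal sketch of that env wiring (names and command are illustrative, not the real fixture):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIPod exposes the pod's own UID to its container through the
    // downward API (metadata.uid fieldRef), then prints it.
    func downwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
                    Env: []corev1.EnvVar{{
                        Name: "POD_UID",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                        },
                    }},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
    }

    func main() {
        _ = downwardAPIPod()
    }
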
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:56:44.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 26 12:56:44.551: INFO: PodSpec: initContainers in spec.initContainers
Feb 26 12:58:01.828: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-70e3ed6e-5897-11ea-8134-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-z98cq", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-z98cq/pods/pod-init-70e3ed6e-5897-11ea-8134-0242ac110008", UID:"70e58aa7-5897-11ea-a994-fa163e34d433", ResourceVersion:"22984204", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718318604, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"551348599"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v4d44", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001de8340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4d44", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4d44", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4d44", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a4a8e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c07d40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a4a9e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a4aa00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000a4aa08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a4aa0c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318605, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318605, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318605, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318604, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0008560e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00147cfc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00147d030)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://02d85869f6ed8c1a8d4de685850d0892ff0c7b7cb638b64ffe87049f0828342c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000856120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000856100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:58:01.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-z98cq" for this suite.
Feb 26 12:58:25.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:58:26.064: INFO: namespace: e2e-tests-init-container-z98cq, resource: bindings, ignored listing per whitelist
Feb 26 12:58:26.187: INFO: namespace e2e-tests-init-container-z98cq deletion completed in 24.233540069s

• [SLOW TEST:102.053 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:58:26.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ad9a7e96-5897-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:58:26.534: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-gw2bv" to be "success or failure"
Feb 26 12:58:26.590: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 55.238897ms
Feb 26 12:58:28.902: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36796782s
Feb 26 12:58:30.938: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403405069s
Feb 26 12:58:33.325: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.790674436s
Feb 26 12:58:35.341: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.806201148s
Feb 26 12:58:39.055: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.521032927s
Feb 26 12:58:41.656: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.121389084s
STEP: Saw pod success
Feb 26 12:58:41.656: INFO: Pod "pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:58:41.667: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 12:58:42.055: INFO: Waiting for pod pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008 to disappear
Feb 26 12:58:42.066: INFO: Pod pod-projected-configmaps-ad9b26bd-5897-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:58:42.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gw2bv" for this suite.
Feb 26 12:58:50.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:58:50.288: INFO: namespace: e2e-tests-projected-gw2bv, resource: bindings, ignored listing per whitelist
Feb 26 12:58:50.324: INFO: namespace e2e-tests-projected-gw2bv deletion completed in 8.239454853s

• [SLOW TEST:24.137 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
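Annotation: projected configMap volumes wrap the ConfigMap behind a "projected" volume source; this spec mounts the same ConfigMap through two such volumes in one pod and reads both. A sketch of one projected volume of that kind (names are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // projectedConfigMapVolume builds a projected volume exposing a single
    // ConfigMap; the spec above mounts two of these into the same pod.
    func projectedConfigMapVolume(volName, configMapName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                        },
                    }},
                },
            },
        }
    }

    func main() {
        _ = projectedConfigMapVolume("projected-configmap-volume", "projected-configmap-test-volume")
    }
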
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:58:50.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-bc1529f7-5897-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 12:58:50.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-cjqs4" to be "success or failure"
Feb 26 12:58:50.762: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.899424ms
Feb 26 12:58:53.028: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276676662s
Feb 26 12:58:55.041: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289918094s
Feb 26 12:58:57.278: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527191641s
Feb 26 12:58:59.705: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954321588s
Feb 26 12:59:01.719: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.967845375s
STEP: Saw pod success
Feb 26 12:59:01.719: INFO: Pod "pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 12:59:01.724: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 12:59:02.375: INFO: Waiting for pod pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008 to disappear
Feb 26 12:59:02.392: INFO: Pod pod-projected-configmaps-bc188309-5897-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:59:02.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cjqs4" for this suite.
Feb 26 12:59:08.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:59:08.657: INFO: namespace: e2e-tests-projected-cjqs4, resource: bindings, ignored listing per whitelist
Feb 26 12:59:08.680: INFO: namespace e2e-tests-projected-cjqs4 deletion completed in 6.278320602s

• [SLOW TEST:18.356 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:59:08.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-tskbp.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tskbp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tskbp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-tskbp.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tskbp.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-tskbp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 26 12:59:29.146: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.150: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.153: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.156: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.160: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.169: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.175: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.180: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.184: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tskbp.svc.cluster.local from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.192: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.197: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.201: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008: the server could not find the requested resource (get pods dns-test-c6ed03a1-5897-11ea-8134-0242ac110008)
Feb 26 12:59:29.201: INFO: Lookups using e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-tskbp.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 26 12:59:34.417: INFO: DNS probes using e2e-tests-dns-tskbp/dns-test-c6ed03a1-5897-11ea-8134-0242ac110008 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 12:59:34.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-tskbp" for this suite.
Feb 26 12:59:42.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 12:59:42.757: INFO: namespace: e2e-tests-dns-tskbp, resource: bindings, ignored listing per whitelist
Feb 26 12:59:42.809: INFO: namespace e2e-tests-dns-tskbp deletion completed in 8.287275865s

• [SLOW TEST:34.129 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
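Annotation: the probe pods above loop over dig/getent lookups for the kubernetes Service names and the pod's own A record, writing an OK marker file per successful lookup; the partial failure at 12:59:29 clears on the next pass and the probe succeeds at 12:59:34. From inside any pod on this cluster, the same resolution check can be sketched in Go with a plain resolver lookup — illustrative only, and it relies on the pod's /etc/resolv.conf pointing at cluster DNS with the usual search path:

    package main

    import (
        "fmt"
        "net"
    )

    // main resolves the in-cluster names the conformance probe queries; run
    // inside a pod, each should return the kubernetes Service's ClusterIP.
    func main() {
        names := []string{
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster.local",
        }
        for _, name := range names {
            addrs, err := net.LookupHost(name)
            if err != nil {
                fmt.Printf("lookup %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("lookup %s -> %v\n", name, addrs)
        }
    }
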
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 12:59:42.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-hqdm
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 12:59:43.069: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hqdm" in namespace "e2e-tests-subpath-t9w6j" to be "success or failure"
Feb 26 12:59:43.149: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 79.942609ms
Feb 26 12:59:45.256: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186645593s
Feb 26 12:59:47.268: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198683978s
Feb 26 12:59:49.842: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.773054327s
Feb 26 12:59:52.026: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.957419202s
Feb 26 12:59:54.149: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 11.080394374s
Feb 26 12:59:56.367: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 13.297660541s
Feb 26 13:00:00.329: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Pending", Reason="", readiness=false. Elapsed: 17.26017367s
Feb 26 13:00:02.342: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 19.272910886s
Feb 26 13:00:04.363: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 21.293738015s
Feb 26 13:00:06.393: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 23.32387564s
Feb 26 13:00:08.418: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 25.349246864s
Feb 26 13:00:10.470: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 27.401020835s
Feb 26 13:00:12.498: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 29.429442936s
Feb 26 13:00:14.526: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 31.457319687s
Feb 26 13:00:16.565: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Running", Reason="", readiness=false. Elapsed: 33.496118912s
Feb 26 13:00:18.584: INFO: Pod "pod-subpath-test-configmap-hqdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.515234679s
STEP: Saw pod success
Feb 26 13:00:18.584: INFO: Pod "pod-subpath-test-configmap-hqdm" satisfied condition "success or failure"
Feb 26 13:00:18.589: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-hqdm container test-container-subpath-configmap-hqdm: 
STEP: delete the pod
Feb 26 13:00:19.374: INFO: Waiting for pod pod-subpath-test-configmap-hqdm to disappear
Feb 26 13:00:19.633: INFO: Pod pod-subpath-test-configmap-hqdm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hqdm
Feb 26 13:00:19.633: INFO: Deleting pod "pod-subpath-test-configmap-hqdm" in namespace "e2e-tests-subpath-t9w6j"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:00:19.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-t9w6j" for this suite.
Feb 26 13:00:25.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:00:25.902: INFO: namespace: e2e-tests-subpath-t9w6j, resource: bindings, ignored listing per whitelist
Feb 26 13:00:26.042: INFO: namespace e2e-tests-subpath-t9w6j deletion completed in 6.374835059s

• [SLOW TEST:43.233 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
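
The Subpath spec above exercises a ConfigMap volume consumed through a subPath. A minimal hand-run sketch of the same idea (names, image and key are illustrative, not the exact manifest the e2e framework generates; assumes kubectl access to a cluster):

kubectl create configmap subpath-config --from-literal=data-1=value-1

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config/data-1
      subPath: data-1            # mount only this key as a file, not the whole volume
  volumes:
  - name: config-vol
    configMap:
      name: subpath-config
EOF
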
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:00:26.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-f4ffb9cd-5897-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 13:00:26.274: INFO: Waiting up to 5m0s for pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-d5gwt" to be "success or failure"
Feb 26 13:00:26.285: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275953ms
Feb 26 13:00:28.306: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031139534s
Feb 26 13:00:30.361: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08684813s
Feb 26 13:00:32.481: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206190539s
Feb 26 13:00:34.505: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230532279s
Feb 26 13:00:36.530: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.25537477s
STEP: Saw pod success
Feb 26 13:00:36.530: INFO: Pod "pod-secrets-f501248c-5897-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:00:36.542: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f501248c-5897-11ea-8134-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 26 13:00:37.415: INFO: Waiting for pod pod-secrets-f501248c-5897-11ea-8134-0242ac110008 to disappear
Feb 26 13:00:37.752: INFO: Pod pod-secrets-f501248c-5897-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:00:37.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d5gwt" for this suite.
Feb 26 13:00:43.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:00:44.108: INFO: namespace: e2e-tests-secrets-d5gwt, resource: bindings, ignored listing per whitelist
Feb 26 13:00:44.131: INFO: namespace e2e-tests-secrets-d5gwt deletion completed in 6.363624655s

• [SLOW TEST:18.088 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
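
The Secrets spec above maps a Secret key to a new file name and sets a per-item file mode. A minimal sketch of that mapping (illustrative names; assumes kubectl access):

kubectl create secret generic test-secret --from-literal=data-1=value-1

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      items:
      - key: data-1
        path: new-path-data-1    # the mapping: key data-1 appears under this file name
        mode: 0400               # per-item file mode, overriding defaultMode
EOF
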
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:00:44.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 26 13:00:44.556: INFO: Number of nodes with available pods: 0
Feb 26 13:00:44.557: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:45.577: INFO: Number of nodes with available pods: 0
Feb 26 13:00:45.577: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:46.876: INFO: Number of nodes with available pods: 0
Feb 26 13:00:46.876: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:47.583: INFO: Number of nodes with available pods: 0
Feb 26 13:00:47.583: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:48.592: INFO: Number of nodes with available pods: 0
Feb 26 13:00:48.592: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:49.584: INFO: Number of nodes with available pods: 0
Feb 26 13:00:49.585: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:50.992: INFO: Number of nodes with available pods: 0
Feb 26 13:00:50.992: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:51.656: INFO: Number of nodes with available pods: 0
Feb 26 13:00:51.657: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:52.747: INFO: Number of nodes with available pods: 0
Feb 26 13:00:52.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:53.609: INFO: Number of nodes with available pods: 0
Feb 26 13:00:53.609: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:54.587: INFO: Number of nodes with available pods: 0
Feb 26 13:00:54.588: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:55.585: INFO: Number of nodes with available pods: 1
Feb 26 13:00:55.585: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 26 13:00:55.739: INFO: Number of nodes with available pods: 0
Feb 26 13:00:55.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:56.765: INFO: Number of nodes with available pods: 0
Feb 26 13:00:56.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:57.829: INFO: Number of nodes with available pods: 0
Feb 26 13:00:57.829: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:58.793: INFO: Number of nodes with available pods: 0
Feb 26 13:00:58.793: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:00:59.970: INFO: Number of nodes with available pods: 0
Feb 26 13:00:59.970: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:00.782: INFO: Number of nodes with available pods: 0
Feb 26 13:01:00.782: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:02.576: INFO: Number of nodes with available pods: 0
Feb 26 13:01:02.577: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:03.222: INFO: Number of nodes with available pods: 0
Feb 26 13:01:03.222: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:04.262: INFO: Number of nodes with available pods: 0
Feb 26 13:01:04.263: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:04.841: INFO: Number of nodes with available pods: 0
Feb 26 13:01:04.842: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:05.759: INFO: Number of nodes with available pods: 0
Feb 26 13:01:05.759: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:06.819: INFO: Number of nodes with available pods: 0
Feb 26 13:01:06.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 26 13:01:07.757: INFO: Number of nodes with available pods: 1
Feb 26 13:01:07.757: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5gtjv, will wait for the garbage collector to delete the pods
Feb 26 13:01:07.864: INFO: Deleting DaemonSet.extensions daemon-set took: 48.272254ms
Feb 26 13:01:08.065: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.887002ms
Feb 26 13:01:22.684: INFO: Number of nodes with available pods: 0
Feb 26 13:01:22.685: INFO: Number of running nodes: 0, number of available pods: 0
Feb 26 13:01:22.693: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5gtjv/daemonsets","resourceVersion":"22984641"},"items":null}

Feb 26 13:01:22.696: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5gtjv/pods","resourceVersion":"22984641"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:01:22.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5gtjv" for this suite.
Feb 26 13:01:30.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:01:30.829: INFO: namespace: e2e-tests-daemonsets-5gtjv, resource: bindings, ignored listing per whitelist
Feb 26 13:01:30.897: INFO: namespace e2e-tests-daemonsets-5gtjv deletion completed in 8.18862109s

• [SLOW TEST:46.765 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
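
The Daemon set spec above flips one daemon pod's phase to Failed through the API and waits for the controller to replace it. Deleting a daemon pod by hand triggers a similar replacement; a minimal sketch (illustrative names, image taken from this run, assumes kubectl access):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Remove a daemon pod; the DaemonSet controller recreates it on the same node.
kubectl delete pod -l app=daemon-set
kubectl get pods -l app=daemon-set -o wide --watch
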
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:01:30.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 26 13:01:41.894: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1bae5c36-5898-11ea-8134-0242ac110008"
Feb 26 13:01:41.894: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1bae5c36-5898-11ea-8134-0242ac110008" in namespace "e2e-tests-pods-2vnrz" to be "terminated due to deadline exceeded"
Feb 26 13:01:41.909: INFO: Pod "pod-update-activedeadlineseconds-1bae5c36-5898-11ea-8134-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 14.82128ms
Feb 26 13:01:43.930: INFO: Pod "pod-update-activedeadlineseconds-1bae5c36-5898-11ea-8134-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.036346131s
Feb 26 13:01:43.931: INFO: Pod "pod-update-activedeadlineseconds-1bae5c36-5898-11ea-8134-0242ac110008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:01:43.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-2vnrz" for this suite.
Feb 26 13:01:50.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:01:50.086: INFO: namespace: e2e-tests-pods-2vnrz, resource: bindings, ignored listing per whitelist
Feb 26 13:01:50.168: INFO: namespace e2e-tests-pods-2vnrz deletion completed in 6.217367097s

• [SLOW TEST:19.271 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
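
The Pods spec above sets activeDeadlineSeconds on a running pod and waits for the kubelet to terminate it with reason DeadlineExceeded. A minimal sketch (illustrative names; the image is the one used elsewhere in this run; on an existing pod activeDeadlineSeconds can only be set if unset, or lowered):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF

# Once the pod is Running, give it a 5-second deadline; the kubelet then kills it
# and the pod ends up Failed with reason DeadlineExceeded.
kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod deadline-demo -o jsonpath='{.status.phase} {.status.reason}'
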
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:01:50.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-4skll/secret-test-2723cc7f-5898-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 13:01:50.403: INFO: Waiting up to 5m0s for pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-4skll" to be "success or failure"
Feb 26 13:01:50.451: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 48.253162ms
Feb 26 13:01:52.580: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17759788s
Feb 26 13:01:54.603: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199884593s
Feb 26 13:01:58.117: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.714446098s
Feb 26 13:02:00.140: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.737520778s
Feb 26 13:02:02.364: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.961392749s
STEP: Saw pod success
Feb 26 13:02:02.364: INFO: Pod "pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:02:02.701: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008 container env-test: 
STEP: delete the pod
Feb 26 13:02:03.550: INFO: Waiting for pod pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008 to disappear
Feb 26 13:02:03.570: INFO: Pod pod-configmaps-272da2b2-5898-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:02:03.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4skll" for this suite.
Feb 26 13:02:09.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:02:09.916: INFO: namespace: e2e-tests-secrets-4skll, resource: bindings, ignored listing per whitelist
Feb 26 13:02:09.982: INFO: namespace e2e-tests-secrets-4skll deletion completed in 6.405528348s

• [SLOW TEST:19.813 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
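
The Secrets spec above consumes a Secret key through an environment variable rather than a volume. A minimal sketch (illustrative names; assumes kubectl access):

kubectl create secret generic env-secret --from-literal=data-1=value-1

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-secret
          key: data-1
EOF
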
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:02:09.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 13:02:10.234: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 26 13:02:10.365: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 26 13:02:15.380: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 26 13:02:21.393: INFO: Creating deployment "test-rolling-update-deployment"
Feb 26 13:02:21.409: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 26 13:02:21.426: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 26 13:02:23.682: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 26 13:02:23.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 13:02:25.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 13:02:28.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 13:02:30.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 13:02:31.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63718318941, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 26 13:02:33.801: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 26 13:02:33.854: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-7bk6n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7bk6n/deployments/test-rolling-update-deployment,UID:39aa5210-5898-11ea-a994-fa163e34d433,ResourceVersion:22984841,Generation:1,CreationTimestamp:2020-02-26 13:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-26 13:02:21 +0000 UTC 2020-02-26 13:02:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-26 13:02:33 +0000 UTC 2020-02-26 13:02:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 26 13:02:33.893: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-7bk6n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7bk6n/replicasets/test-rolling-update-deployment-75db98fb4c,UID:39b316c6-5898-11ea-a994-fa163e34d433,ResourceVersion:22984831,Generation:1,CreationTimestamp:2020-02-26 13:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 39aa5210-5898-11ea-a994-fa163e34d433 0xc00267f537 0xc00267f538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 26 13:02:33.893: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 26 13:02:33.894: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-7bk6n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7bk6n/replicasets/test-rolling-update-controller,UID:3303777f-5898-11ea-a994-fa163e34d433,ResourceVersion:22984840,Generation:2,CreationTimestamp:2020-02-26 13:02:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 39aa5210-5898-11ea-a994-fa163e34d433 0xc00267f337 0xc00267f338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 26 13:02:34.015: INFO: Pod "test-rolling-update-deployment-75db98fb4c-8gj5l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-8gj5l,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-7bk6n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7bk6n/pods/test-rolling-update-deployment-75db98fb4c-8gj5l,UID:39b58c59-5898-11ea-a994-fa163e34d433,ResourceVersion:22984830,Generation:0,CreationTimestamp:2020-02-26 13:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 39b316c6-5898-11ea-a994-fa163e34d433 0xc0026727e7 0xc0026727e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pblhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pblhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pblhj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002672850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002672870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:02:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:02:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:02:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:02:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-26 13:02:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-26 13:02:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://dc4203de8fbffaa690f5633fd5a0e4ecdcebd2f970f10eca0a8dd9bb8461c17c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:02:34.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-7bk6n" for this suite.
Feb 26 13:02:42.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:02:42.566: INFO: namespace: e2e-tests-deployment-7bk6n, resource: bindings, ignored listing per whitelist
Feb 26 13:02:43.028: INFO: namespace e2e-tests-deployment-7bk6n deletion completed in 8.960896107s

• [SLOW TEST:33.045 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
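
The Deployment spec above checks that a RollingUpdateDeployment replaces old pods with new ones while the old ReplicaSet is kept at zero replicas. A simplified hand-run sketch that skips the adopted ReplicaSet (illustrative names; images taken from this run):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl set image deployment/rolling-update-demo nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status deployment/rolling-update-demo
kubectl get replicasets -l name=sample-pod   # the old ReplicaSet remains, scaled to 0
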
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:02:43.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-46da0a07-5898-11ea-8134-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 26 13:02:43.608: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-bvwd5" to be "success or failure"
Feb 26 13:02:43.851: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 242.561181ms
Feb 26 13:02:47.038: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429390557s
Feb 26 13:02:49.057: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.448768322s
Feb 26 13:02:51.266: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.658004085s
Feb 26 13:02:53.279: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.670572179s
Feb 26 13:02:55.695: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.087185833s
Feb 26 13:02:58.569: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.960965826s
Feb 26 13:03:01.077: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.46846693s
Feb 26 13:03:04.243: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.634830683s
Feb 26 13:03:06.264: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.65529272s
STEP: Saw pod success
Feb 26 13:03:06.264: INFO: Pod "pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:03:06.279: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 26 13:03:06.821: INFO: Waiting for pod pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008 to disappear
Feb 26 13:03:06.847: INFO: Pod pod-projected-configmaps-46e23084-5898-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:03:06.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bvwd5" for this suite.
Feb 26 13:03:15.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:03:15.056: INFO: namespace: e2e-tests-projected-bvwd5, resource: bindings, ignored listing per whitelist
Feb 26 13:03:15.166: INFO: namespace e2e-tests-projected-bvwd5 deletion completed in 8.270752027s

• [SLOW TEST:32.138 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
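
The Projected configMap spec above consumes a ConfigMap through a projected volume, which can combine several sources under one mount. A minimal sketch with a single configMap source (illustrative names; assumes kubectl access):

kubectl create configmap projected-config --from-literal=data-1=value-1

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/projected-volume/data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /projected-volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-config
EOF
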
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:03:15.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 26 13:03:15.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:20.028: INFO: stderr: ""
Feb 26 13:03:20.028: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 26 13:03:20.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:20.173: INFO: stderr: ""
Feb 26 13:03:20.174: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb 26 13:03:25.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:26.909: INFO: stderr: ""
Feb 26 13:03:26.909: INFO: stdout: "update-demo-nautilus-gjd5d update-demo-nautilus-ksrlb "
Feb 26 13:03:26.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjd5d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:27.197: INFO: stderr: ""
Feb 26 13:03:27.197: INFO: stdout: ""
Feb 26 13:03:27.197: INFO: update-demo-nautilus-gjd5d is created but not running
Feb 26 13:03:32.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:33.068: INFO: stderr: ""
Feb 26 13:03:33.068: INFO: stdout: "update-demo-nautilus-gjd5d update-demo-nautilus-ksrlb "
Feb 26 13:03:33.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjd5d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:33.323: INFO: stderr: ""
Feb 26 13:03:33.323: INFO: stdout: ""
Feb 26 13:03:33.324: INFO: update-demo-nautilus-gjd5d is created but not running
Feb 26 13:03:38.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:38.468: INFO: stderr: ""
Feb 26 13:03:38.469: INFO: stdout: "update-demo-nautilus-gjd5d update-demo-nautilus-ksrlb "
Feb 26 13:03:38.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjd5d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:38.635: INFO: stderr: ""
Feb 26 13:03:38.636: INFO: stdout: "true"
Feb 26 13:03:38.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjd5d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:38.801: INFO: stderr: ""
Feb 26 13:03:38.801: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 13:03:38.802: INFO: validating pod update-demo-nautilus-gjd5d
Feb 26 13:03:38.913: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 13:03:38.913: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 13:03:38.913: INFO: update-demo-nautilus-gjd5d is verified up and running
Feb 26 13:03:38.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksrlb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:39.056: INFO: stderr: ""
Feb 26 13:03:39.057: INFO: stdout: "true"
Feb 26 13:03:39.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksrlb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:39.151: INFO: stderr: ""
Feb 26 13:03:39.151: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 26 13:03:39.151: INFO: validating pod update-demo-nautilus-ksrlb
Feb 26 13:03:39.165: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 26 13:03:39.165: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 26 13:03:39.165: INFO: update-demo-nautilus-ksrlb is verified up and running
STEP: using delete to clean up resources
Feb 26 13:03:39.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:39.490: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 26 13:03:39.490: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 26 13:03:39.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-g58dp'
Feb 26 13:03:39.741: INFO: stderr: "No resources found.\n"
Feb 26 13:03:39.742: INFO: stdout: ""
Feb 26 13:03:39.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-g58dp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 26 13:03:40.034: INFO: stderr: ""
Feb 26 13:03:40.034: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:03:40.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g58dp" for this suite.
Feb 26 13:04:08.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:04:08.322: INFO: namespace: e2e-tests-kubectl-g58dp, resource: bindings, ignored listing per whitelist
Feb 26 13:04:08.326: INFO: namespace e2e-tests-kubectl-g58dp deletion completed in 28.246058942s

• [SLOW TEST:53.159 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
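
The Update Demo spec above drives kubectl directly: it creates a replication controller, polls the pods with go-templates until both replicas are verified, then force-deletes the controller. A minimal sketch of the create/verify/delete cycle (the image is the one used in the run; the manifest is an illustrative stand-in for the test's update-demo fixture):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
EOF

kubectl get pods -l name=update-demo
kubectl delete rc update-demo-nautilus --grace-period=0 --force
kubectl get rc,svc -l name=update-demo --no-headers   # expect "No resources found."
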
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:04:08.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-7tnlh
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 26 13:04:08.654: INFO: Found 0 stateful pods, waiting for 3
Feb 26 13:04:18.679: INFO: Found 1 stateful pods, waiting for 3
Feb 26 13:04:28.880: INFO: Found 2 stateful pods, waiting for 3
Feb 26 13:04:40.678: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:04:40.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:04:40.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 13:04:48.680: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:04:48.681: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:04:48.681: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 26 13:04:48.750: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 26 13:04:58.938: INFO: Updating stateful set ss2
Feb 26 13:04:58.960: INFO: Waiting for Pod e2e-tests-statefulset-7tnlh/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 13:05:08.996: INFO: Waiting for Pod e2e-tests-statefulset-7tnlh/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 26 13:05:19.488: INFO: Found 2 stateful pods, waiting for 3
Feb 26 13:05:29.609: INFO: Found 2 stateful pods, waiting for 3
Feb 26 13:05:39.516: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:05:39.517: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:05:39.517: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 26 13:05:49.508: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:05:49.508: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 26 13:05:49.508: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 26 13:05:49.559: INFO: Updating stateful set ss2
Feb 26 13:05:49.573: INFO: Waiting for Pod e2e-tests-statefulset-7tnlh/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 13:05:59.619: INFO: Updating stateful set ss2
Feb 26 13:05:59.792: INFO: Waiting for StatefulSet e2e-tests-statefulset-7tnlh/ss2 to complete update
Feb 26 13:05:59.792: INFO: Waiting for Pod e2e-tests-statefulset-7tnlh/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 13:06:10.506: INFO: Waiting for StatefulSet e2e-tests-statefulset-7tnlh/ss2 to complete update
Feb 26 13:06:10.506: INFO: Waiting for Pod e2e-tests-statefulset-7tnlh/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 26 13:06:19.877: INFO: Waiting for StatefulSet e2e-tests-statefulset-7tnlh/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 26 13:06:29.826: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7tnlh
Feb 26 13:06:29.832: INFO: Scaling statefulset ss2 to 0
Feb 26 13:07:09.923: INFO: Waiting for statefulset status.replicas updated to 0
Feb 26 13:07:09.937: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:07:10.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-7tnlh" for this suite.
Feb 26 13:07:18.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:07:18.971: INFO: namespace: e2e-tests-statefulset-7tnlh, resource: bindings, ignored listing per whitelist
Feb 26 13:07:18.993: INFO: namespace e2e-tests-statefulset-7tnlh deletion completed in 8.716027941s

• [SLOW TEST:190.665 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
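
The StatefulSet spec above performs a canary update by keeping updateStrategy.rollingUpdate.partition above the highest ordinal it wants untouched, then lowers the partition to phase the rollout in. A minimal sketch (illustrative names; the headless Service mirrors the "service test" created in the run; images taken from the run):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None           # headless service required by the StatefulSet
  selector:
    app: ss2
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 3          # while partition > highest ordinal, no pod is updated
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Canary: allow only ordinal 2 to pick up the new revision, then roll the rest.
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl rollout status statefulset/ss2
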
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:07:18.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 26 13:07:19.124: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 26 13:07:19.136: INFO: Waiting for terminating namespaces to be deleted...
Feb 26 13:07:19.187: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 26 13:07:19.206: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:07:19.206: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:07:19.206: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:07:19.206: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 26 13:07:19.206: INFO: 	Container coredns ready: true, restart count 0
Feb 26 13:07:19.206: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 26 13:07:19.206: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 13:07:19.206: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:07:19.206: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 26 13:07:19.206: INFO: 	Container weave ready: true, restart count 0
Feb 26 13:07:19.206: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 13:07:19.206: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 26 13:07:19.206: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f6f5ac409982a5], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:07:20.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-t2c2m" for this suite.
Feb 26 13:07:26.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:07:26.499: INFO: namespace: e2e-tests-sched-pred-t2c2m, resource: bindings, ignored listing per whitelist
Feb 26 13:07:26.659: INFO: namespace e2e-tests-sched-pred-t2c2m deletion completed in 6.399237391s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.666 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
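
The FailedScheduling event recorded above is what any pod gets when its nodeSelector matches no node. A minimal sketch, assuming a hypothetical label key that no node in the cluster carries:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.com/unsatisfiable: "true"   # hypothetical label; no node has it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod restricted-pod        # stays Pending
kubectl describe pod restricted-pod   # Events: FailedScheduling ... node(s) didn't match node selector
kubectl delete pod restricted-pod
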
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:07:26.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 13:07:27.166: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.742119ms)
Feb 26 13:07:27.227: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 60.694865ms)
Feb 26 13:07:27.241: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.675446ms)
Feb 26 13:07:27.247: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.095388ms)
Feb 26 13:07:27.260: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.750383ms)
Feb 26 13:07:27.264: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.470188ms)
Feb 26 13:07:27.268: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.3129ms)
Feb 26 13:07:27.272: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.104574ms)
Feb 26 13:07:27.275: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.532994ms)
Feb 26 13:07:27.279: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.969806ms)
Feb 26 13:07:27.284: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.602213ms)
Feb 26 13:07:27.289: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.969929ms)
Feb 26 13:07:27.294: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.776161ms)
Feb 26 13:07:27.298: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.552546ms)
Feb 26 13:07:27.302: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.679899ms)
Feb 26 13:07:27.306: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.407741ms)
Feb 26 13:07:27.310: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.082719ms)
Feb 26 13:07:27.316: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.524843ms)
Feb 26 13:07:27.320: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.416414ms)
Feb 26 13:07:27.373: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 52.721044ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:07:27.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-dzh48" for this suite.
Feb 26 13:07:33.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:07:33.512: INFO: namespace: e2e-tests-proxy-dzh48, resource: bindings, ignored listing per whitelist
Feb 26 13:07:33.613: INFO: namespace e2e-tests-proxy-dzh48 deletion completed in 6.234995411s

• [SLOW TEST:6.951 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
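
The endpoint hit by each of the twenty requests above is the apiserver's node proxy subresource with an explicit kubelet port. It can be queried directly; a sketch using the node name and port from the log:

kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
# Without the explicit port, the apiserver proxies to the node's default kubelet port:
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"
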
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:07:33.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 26 13:08:10.340: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:10.340: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:10.646372       9 log.go:172] (0xc0020d42c0) (0xc000fece60) Create stream
I0226 13:08:10.646786       9 log.go:172] (0xc0020d42c0) (0xc000fece60) Stream added, broadcasting: 1
I0226 13:08:10.665269       9 log.go:172] (0xc0020d42c0) Reply frame received for 1
I0226 13:08:10.665496       9 log.go:172] (0xc0020d42c0) (0xc000fed0e0) Create stream
I0226 13:08:10.665529       9 log.go:172] (0xc0020d42c0) (0xc000fed0e0) Stream added, broadcasting: 3
I0226 13:08:10.667182       9 log.go:172] (0xc0020d42c0) Reply frame received for 3
I0226 13:08:10.667227       9 log.go:172] (0xc0020d42c0) (0xc000fed180) Create stream
I0226 13:08:10.667236       9 log.go:172] (0xc0020d42c0) (0xc000fed180) Stream added, broadcasting: 5
I0226 13:08:10.668324       9 log.go:172] (0xc0020d42c0) Reply frame received for 5
I0226 13:08:10.859328       9 log.go:172] (0xc0020d42c0) Data frame received for 3
I0226 13:08:10.859509       9 log.go:172] (0xc000fed0e0) (3) Data frame handling
I0226 13:08:10.859540       9 log.go:172] (0xc000fed0e0) (3) Data frame sent
I0226 13:08:11.090356       9 log.go:172] (0xc0020d42c0) Data frame received for 1
I0226 13:08:11.090483       9 log.go:172] (0xc0020d42c0) (0xc000fed0e0) Stream removed, broadcasting: 3
I0226 13:08:11.090745       9 log.go:172] (0xc000fece60) (1) Data frame handling
I0226 13:08:11.090799       9 log.go:172] (0xc000fece60) (1) Data frame sent
I0226 13:08:11.090849       9 log.go:172] (0xc0020d42c0) (0xc000fed180) Stream removed, broadcasting: 5
I0226 13:08:11.090915       9 log.go:172] (0xc0020d42c0) (0xc000fece60) Stream removed, broadcasting: 1
I0226 13:08:11.090924       9 log.go:172] (0xc0020d42c0) Go away received
I0226 13:08:11.091446       9 log.go:172] (0xc0020d42c0) (0xc000fece60) Stream removed, broadcasting: 1
I0226 13:08:11.091470       9 log.go:172] (0xc0020d42c0) (0xc000fed0e0) Stream removed, broadcasting: 3
I0226 13:08:11.091484       9 log.go:172] (0xc0020d42c0) (0xc000fed180) Stream removed, broadcasting: 5
Feb 26 13:08:11.091: INFO: Exec stderr: ""
Feb 26 13:08:11.091: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:11.091: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:11.241781       9 log.go:172] (0xc001e122c0) (0xc000da1f40) Create stream
I0226 13:08:11.242056       9 log.go:172] (0xc001e122c0) (0xc000da1f40) Stream added, broadcasting: 1
I0226 13:08:11.248836       9 log.go:172] (0xc001e122c0) Reply frame received for 1
I0226 13:08:11.248884       9 log.go:172] (0xc001e122c0) (0xc000bbc3c0) Create stream
I0226 13:08:11.248897       9 log.go:172] (0xc001e122c0) (0xc000bbc3c0) Stream added, broadcasting: 3
I0226 13:08:11.249933       9 log.go:172] (0xc001e122c0) Reply frame received for 3
I0226 13:08:11.249966       9 log.go:172] (0xc001e122c0) (0xc00266f0e0) Create stream
I0226 13:08:11.249994       9 log.go:172] (0xc001e122c0) (0xc00266f0e0) Stream added, broadcasting: 5
I0226 13:08:11.251059       9 log.go:172] (0xc001e122c0) Reply frame received for 5
I0226 13:08:11.432069       9 log.go:172] (0xc001e122c0) Data frame received for 3
I0226 13:08:11.432158       9 log.go:172] (0xc000bbc3c0) (3) Data frame handling
I0226 13:08:11.432189       9 log.go:172] (0xc000bbc3c0) (3) Data frame sent
I0226 13:08:11.535360       9 log.go:172] (0xc001e122c0) (0xc00266f0e0) Stream removed, broadcasting: 5
I0226 13:08:11.535490       9 log.go:172] (0xc001e122c0) Data frame received for 1
I0226 13:08:11.535532       9 log.go:172] (0xc001e122c0) (0xc000bbc3c0) Stream removed, broadcasting: 3
I0226 13:08:11.535580       9 log.go:172] (0xc000da1f40) (1) Data frame handling
I0226 13:08:11.535618       9 log.go:172] (0xc000da1f40) (1) Data frame sent
I0226 13:08:11.535633       9 log.go:172] (0xc001e122c0) (0xc000da1f40) Stream removed, broadcasting: 1
I0226 13:08:11.535652       9 log.go:172] (0xc001e122c0) Go away received
I0226 13:08:11.535893       9 log.go:172] (0xc001e122c0) (0xc000da1f40) Stream removed, broadcasting: 1
I0226 13:08:11.535909       9 log.go:172] (0xc001e122c0) (0xc000bbc3c0) Stream removed, broadcasting: 3
I0226 13:08:11.535924       9 log.go:172] (0xc001e122c0) (0xc00266f0e0) Stream removed, broadcasting: 5
Feb 26 13:08:11.535: INFO: Exec stderr: ""
Feb 26 13:08:11.536: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:11.536: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:11.648151       9 log.go:172] (0xc001e12580) (0xc001bf0000) Create stream
I0226 13:08:11.648253       9 log.go:172] (0xc001e12580) (0xc001bf0000) Stream added, broadcasting: 1
I0226 13:08:11.653136       9 log.go:172] (0xc001e12580) Reply frame received for 1
I0226 13:08:11.653167       9 log.go:172] (0xc001e12580) (0xc001f34dc0) Create stream
I0226 13:08:11.653181       9 log.go:172] (0xc001e12580) (0xc001f34dc0) Stream added, broadcasting: 3
I0226 13:08:11.654261       9 log.go:172] (0xc001e12580) Reply frame received for 3
I0226 13:08:11.654286       9 log.go:172] (0xc001e12580) (0xc00266f180) Create stream
I0226 13:08:11.654295       9 log.go:172] (0xc001e12580) (0xc00266f180) Stream added, broadcasting: 5
I0226 13:08:11.655513       9 log.go:172] (0xc001e12580) Reply frame received for 5
I0226 13:08:11.778591       9 log.go:172] (0xc001e12580) Data frame received for 3
I0226 13:08:11.778686       9 log.go:172] (0xc001f34dc0) (3) Data frame handling
I0226 13:08:11.778733       9 log.go:172] (0xc001f34dc0) (3) Data frame sent
I0226 13:08:11.909787       9 log.go:172] (0xc001e12580) (0xc001f34dc0) Stream removed, broadcasting: 3
I0226 13:08:11.909996       9 log.go:172] (0xc001e12580) Data frame received for 1
I0226 13:08:11.910033       9 log.go:172] (0xc001e12580) (0xc00266f180) Stream removed, broadcasting: 5
I0226 13:08:11.910063       9 log.go:172] (0xc001bf0000) (1) Data frame handling
I0226 13:08:11.910099       9 log.go:172] (0xc001bf0000) (1) Data frame sent
I0226 13:08:11.910135       9 log.go:172] (0xc001e12580) (0xc001bf0000) Stream removed, broadcasting: 1
I0226 13:08:11.910162       9 log.go:172] (0xc001e12580) Go away received
I0226 13:08:11.910441       9 log.go:172] (0xc001e12580) (0xc001bf0000) Stream removed, broadcasting: 1
I0226 13:08:11.910463       9 log.go:172] (0xc001e12580) (0xc001f34dc0) Stream removed, broadcasting: 3
I0226 13:08:11.910476       9 log.go:172] (0xc001e12580) (0xc00266f180) Stream removed, broadcasting: 5
Feb 26 13:08:11.910: INFO: Exec stderr: ""
Feb 26 13:08:11.910: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:11.910: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:11.997546       9 log.go:172] (0xc0020d49a0) (0xc000fed5e0) Create stream
I0226 13:08:11.997690       9 log.go:172] (0xc0020d49a0) (0xc000fed5e0) Stream added, broadcasting: 1
I0226 13:08:12.003149       9 log.go:172] (0xc0020d49a0) Reply frame received for 1
I0226 13:08:12.003227       9 log.go:172] (0xc0020d49a0) (0xc000bbc500) Create stream
I0226 13:08:12.003240       9 log.go:172] (0xc0020d49a0) (0xc000bbc500) Stream added, broadcasting: 3
I0226 13:08:12.004671       9 log.go:172] (0xc0020d49a0) Reply frame received for 3
I0226 13:08:12.004721       9 log.go:172] (0xc0020d49a0) (0xc00266f220) Create stream
I0226 13:08:12.004748       9 log.go:172] (0xc0020d49a0) (0xc00266f220) Stream added, broadcasting: 5
I0226 13:08:12.006294       9 log.go:172] (0xc0020d49a0) Reply frame received for 5
I0226 13:08:12.129318       9 log.go:172] (0xc0020d49a0) Data frame received for 3
I0226 13:08:12.129482       9 log.go:172] (0xc000bbc500) (3) Data frame handling
I0226 13:08:12.129540       9 log.go:172] (0xc000bbc500) (3) Data frame sent
I0226 13:08:12.283822       9 log.go:172] (0xc0020d49a0) Data frame received for 1
I0226 13:08:12.284005       9 log.go:172] (0xc000fed5e0) (1) Data frame handling
I0226 13:08:12.284093       9 log.go:172] (0xc000fed5e0) (1) Data frame sent
I0226 13:08:12.284125       9 log.go:172] (0xc0020d49a0) (0xc000fed5e0) Stream removed, broadcasting: 1
I0226 13:08:12.284405       9 log.go:172] (0xc0020d49a0) (0xc000bbc500) Stream removed, broadcasting: 3
I0226 13:08:12.284584       9 log.go:172] (0xc0020d49a0) (0xc00266f220) Stream removed, broadcasting: 5
I0226 13:08:12.284618       9 log.go:172] (0xc0020d49a0) Go away received
I0226 13:08:12.284645       9 log.go:172] (0xc0020d49a0) (0xc000fed5e0) Stream removed, broadcasting: 1
I0226 13:08:12.284657       9 log.go:172] (0xc0020d49a0) (0xc000bbc500) Stream removed, broadcasting: 3
I0226 13:08:12.284664       9 log.go:172] (0xc0020d49a0) (0xc00266f220) Stream removed, broadcasting: 5
Feb 26 13:08:12.284: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 26 13:08:12.284: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:12.284: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:12.515985       9 log.go:172] (0xc000d35970) (0xc0021aaf00) Create stream
I0226 13:08:12.516279       9 log.go:172] (0xc000d35970) (0xc0021aaf00) Stream added, broadcasting: 1
I0226 13:08:12.550281       9 log.go:172] (0xc000d35970) Reply frame received for 1
I0226 13:08:12.550611       9 log.go:172] (0xc000d35970) (0xc00266f400) Create stream
I0226 13:08:12.550683       9 log.go:172] (0xc000d35970) (0xc00266f400) Stream added, broadcasting: 3
I0226 13:08:12.554167       9 log.go:172] (0xc000d35970) Reply frame received for 3
I0226 13:08:12.554214       9 log.go:172] (0xc000d35970) (0xc000bbc640) Create stream
I0226 13:08:12.554235       9 log.go:172] (0xc000d35970) (0xc000bbc640) Stream added, broadcasting: 5
I0226 13:08:12.557852       9 log.go:172] (0xc000d35970) Reply frame received for 5
I0226 13:08:12.905986       9 log.go:172] (0xc000d35970) Data frame received for 3
I0226 13:08:12.906381       9 log.go:172] (0xc00266f400) (3) Data frame handling
I0226 13:08:12.906478       9 log.go:172] (0xc00266f400) (3) Data frame sent
I0226 13:08:13.012930       9 log.go:172] (0xc000d35970) (0xc000bbc640) Stream removed, broadcasting: 5
I0226 13:08:13.013231       9 log.go:172] (0xc000d35970) Data frame received for 1
I0226 13:08:13.013259       9 log.go:172] (0xc0021aaf00) (1) Data frame handling
I0226 13:08:13.013293       9 log.go:172] (0xc0021aaf00) (1) Data frame sent
I0226 13:08:13.013336       9 log.go:172] (0xc000d35970) (0xc0021aaf00) Stream removed, broadcasting: 1
I0226 13:08:13.013618       9 log.go:172] (0xc000d35970) (0xc00266f400) Stream removed, broadcasting: 3
I0226 13:08:13.013647       9 log.go:172] (0xc000d35970) Go away received
I0226 13:08:13.014159       9 log.go:172] (0xc000d35970) (0xc0021aaf00) Stream removed, broadcasting: 1
I0226 13:08:13.014470       9 log.go:172] (0xc000d35970) (0xc00266f400) Stream removed, broadcasting: 3
I0226 13:08:13.014482       9 log.go:172] (0xc000d35970) (0xc000bbc640) Stream removed, broadcasting: 5
Feb 26 13:08:13.014: INFO: Exec stderr: ""
Feb 26 13:08:13.014: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:13.014: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:13.084276       9 log.go:172] (0xc001bc02c0) (0xc000bbcb40) Create stream
I0226 13:08:13.084534       9 log.go:172] (0xc001bc02c0) (0xc000bbcb40) Stream added, broadcasting: 1
I0226 13:08:13.093371       9 log.go:172] (0xc001bc02c0) Reply frame received for 1
I0226 13:08:13.093499       9 log.go:172] (0xc001bc02c0) (0xc000fed680) Create stream
I0226 13:08:13.093515       9 log.go:172] (0xc001bc02c0) (0xc000fed680) Stream added, broadcasting: 3
I0226 13:08:13.095999       9 log.go:172] (0xc001bc02c0) Reply frame received for 3
I0226 13:08:13.096036       9 log.go:172] (0xc001bc02c0) (0xc0021aafa0) Create stream
I0226 13:08:13.096049       9 log.go:172] (0xc001bc02c0) (0xc0021aafa0) Stream added, broadcasting: 5
I0226 13:08:13.097780       9 log.go:172] (0xc001bc02c0) Reply frame received for 5
I0226 13:08:13.257144       9 log.go:172] (0xc001bc02c0) Data frame received for 3
I0226 13:08:13.257331       9 log.go:172] (0xc000fed680) (3) Data frame handling
I0226 13:08:13.257377       9 log.go:172] (0xc000fed680) (3) Data frame sent
I0226 13:08:13.365270       9 log.go:172] (0xc001bc02c0) (0xc000fed680) Stream removed, broadcasting: 3
I0226 13:08:13.365491       9 log.go:172] (0xc001bc02c0) (0xc0021aafa0) Stream removed, broadcasting: 5
I0226 13:08:13.365517       9 log.go:172] (0xc001bc02c0) Data frame received for 1
I0226 13:08:13.365547       9 log.go:172] (0xc000bbcb40) (1) Data frame handling
I0226 13:08:13.365566       9 log.go:172] (0xc000bbcb40) (1) Data frame sent
I0226 13:08:13.365593       9 log.go:172] (0xc001bc02c0) (0xc000bbcb40) Stream removed, broadcasting: 1
I0226 13:08:13.365617       9 log.go:172] (0xc001bc02c0) Go away received
I0226 13:08:13.365838       9 log.go:172] (0xc001bc02c0) (0xc000bbcb40) Stream removed, broadcasting: 1
I0226 13:08:13.365851       9 log.go:172] (0xc001bc02c0) (0xc000fed680) Stream removed, broadcasting: 3
I0226 13:08:13.365855       9 log.go:172] (0xc001bc02c0) (0xc0021aafa0) Stream removed, broadcasting: 5
Feb 26 13:08:13.365: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 26 13:08:13.366: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:13.366: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:13.444282       9 log.go:172] (0xc0000dcf20) (0xc00266f7c0) Create stream
I0226 13:08:13.444419       9 log.go:172] (0xc0000dcf20) (0xc00266f7c0) Stream added, broadcasting: 1
I0226 13:08:13.471835       9 log.go:172] (0xc0000dcf20) Reply frame received for 1
I0226 13:08:13.471956       9 log.go:172] (0xc0000dcf20) (0xc00298c000) Create stream
I0226 13:08:13.471969       9 log.go:172] (0xc0000dcf20) (0xc00298c000) Stream added, broadcasting: 3
I0226 13:08:13.478124       9 log.go:172] (0xc0000dcf20) Reply frame received for 3
I0226 13:08:13.478322       9 log.go:172] (0xc0000dcf20) (0xc002986000) Create stream
I0226 13:08:13.478348       9 log.go:172] (0xc0000dcf20) (0xc002986000) Stream added, broadcasting: 5
I0226 13:08:13.480059       9 log.go:172] (0xc0000dcf20) Reply frame received for 5
I0226 13:08:13.630298       9 log.go:172] (0xc0000dcf20) Data frame received for 3
I0226 13:08:13.630397       9 log.go:172] (0xc00298c000) (3) Data frame handling
I0226 13:08:13.630434       9 log.go:172] (0xc00298c000) (3) Data frame sent
I0226 13:08:13.751894       9 log.go:172] (0xc0000dcf20) Data frame received for 1
I0226 13:08:13.752150       9 log.go:172] (0xc0000dcf20) (0xc00298c000) Stream removed, broadcasting: 3
I0226 13:08:13.752303       9 log.go:172] (0xc00266f7c0) (1) Data frame handling
I0226 13:08:13.752355       9 log.go:172] (0xc00266f7c0) (1) Data frame sent
I0226 13:08:13.752433       9 log.go:172] (0xc0000dcf20) (0xc002986000) Stream removed, broadcasting: 5
I0226 13:08:13.752499       9 log.go:172] (0xc0000dcf20) (0xc00266f7c0) Stream removed, broadcasting: 1
I0226 13:08:13.752547       9 log.go:172] (0xc0000dcf20) Go away received
I0226 13:08:13.753196       9 log.go:172] (0xc0000dcf20) (0xc00266f7c0) Stream removed, broadcasting: 1
I0226 13:08:13.753213       9 log.go:172] (0xc0000dcf20) (0xc00298c000) Stream removed, broadcasting: 3
I0226 13:08:13.753228       9 log.go:172] (0xc0000dcf20) (0xc002986000) Stream removed, broadcasting: 5
Feb 26 13:08:13.753: INFO: Exec stderr: ""
Feb 26 13:08:13.753: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:13.753: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:13.967576       9 log.go:172] (0xc0000dcd10) (0xc001fe01e0) Create stream
I0226 13:08:13.968116       9 log.go:172] (0xc0000dcd10) (0xc001fe01e0) Stream added, broadcasting: 1
I0226 13:08:13.983308       9 log.go:172] (0xc0000dcd10) Reply frame received for 1
I0226 13:08:13.983435       9 log.go:172] (0xc0000dcd10) (0xc00298c0a0) Create stream
I0226 13:08:13.983450       9 log.go:172] (0xc0000dcd10) (0xc00298c0a0) Stream added, broadcasting: 3
I0226 13:08:13.986510       9 log.go:172] (0xc0000dcd10) Reply frame received for 3
I0226 13:08:13.986655       9 log.go:172] (0xc0000dcd10) (0xc001fe0280) Create stream
I0226 13:08:13.986681       9 log.go:172] (0xc0000dcd10) (0xc001fe0280) Stream added, broadcasting: 5
I0226 13:08:13.990040       9 log.go:172] (0xc0000dcd10) Reply frame received for 5
I0226 13:08:14.328876       9 log.go:172] (0xc0000dcd10) Data frame received for 3
I0226 13:08:14.329009       9 log.go:172] (0xc00298c0a0) (3) Data frame handling
I0226 13:08:14.329029       9 log.go:172] (0xc00298c0a0) (3) Data frame sent
I0226 13:08:14.521064       9 log.go:172] (0xc0000dcd10) Data frame received for 1
I0226 13:08:14.521455       9 log.go:172] (0xc0000dcd10) (0xc001fe0280) Stream removed, broadcasting: 5
I0226 13:08:14.521562       9 log.go:172] (0xc001fe01e0) (1) Data frame handling
I0226 13:08:14.521694       9 log.go:172] (0xc001fe01e0) (1) Data frame sent
I0226 13:08:14.521953       9 log.go:172] (0xc0000dcd10) (0xc00298c0a0) Stream removed, broadcasting: 3
I0226 13:08:14.522048       9 log.go:172] (0xc0000dcd10) (0xc001fe01e0) Stream removed, broadcasting: 1
I0226 13:08:14.522468       9 log.go:172] (0xc0000dcd10) (0xc001fe01e0) Stream removed, broadcasting: 1
I0226 13:08:14.522522       9 log.go:172] (0xc0000dcd10) (0xc00298c0a0) Stream removed, broadcasting: 3
I0226 13:08:14.522624       9 log.go:172] (0xc0000dcd10) (0xc001fe0280) Stream removed, broadcasting: 5
Feb 26 13:08:14.523: INFO: Exec stderr: ""
Feb 26 13:08:14.524: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:14.524: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:14.524891       9 log.go:172] (0xc0000dcd10) Go away received
I0226 13:08:14.638425       9 log.go:172] (0xc0009c76b0) (0xc002986280) Create stream
I0226 13:08:14.638738       9 log.go:172] (0xc0009c76b0) (0xc002986280) Stream added, broadcasting: 1
I0226 13:08:14.647111       9 log.go:172] (0xc0009c76b0) Reply frame received for 1
I0226 13:08:14.647179       9 log.go:172] (0xc0009c76b0) (0xc001fe03c0) Create stream
I0226 13:08:14.647189       9 log.go:172] (0xc0009c76b0) (0xc001fe03c0) Stream added, broadcasting: 3
I0226 13:08:14.648906       9 log.go:172] (0xc0009c76b0) Reply frame received for 3
I0226 13:08:14.648929       9 log.go:172] (0xc0009c76b0) (0xc00298c140) Create stream
I0226 13:08:14.648936       9 log.go:172] (0xc0009c76b0) (0xc00298c140) Stream added, broadcasting: 5
I0226 13:08:14.650190       9 log.go:172] (0xc0009c76b0) Reply frame received for 5
I0226 13:08:14.794753       9 log.go:172] (0xc0009c76b0) Data frame received for 3
I0226 13:08:14.794896       9 log.go:172] (0xc001fe03c0) (3) Data frame handling
I0226 13:08:14.794928       9 log.go:172] (0xc001fe03c0) (3) Data frame sent
I0226 13:08:14.933575       9 log.go:172] (0xc0009c76b0) (0xc00298c140) Stream removed, broadcasting: 5
I0226 13:08:14.933755       9 log.go:172] (0xc0009c76b0) Data frame received for 1
I0226 13:08:14.933781       9 log.go:172] (0xc002986280) (1) Data frame handling
I0226 13:08:14.933832       9 log.go:172] (0xc002986280) (1) Data frame sent
I0226 13:08:14.933854       9 log.go:172] (0xc0009c76b0) (0xc001fe03c0) Stream removed, broadcasting: 3
I0226 13:08:14.933894       9 log.go:172] (0xc0009c76b0) (0xc002986280) Stream removed, broadcasting: 1
I0226 13:08:14.933970       9 log.go:172] (0xc0009c76b0) Go away received
I0226 13:08:14.934194       9 log.go:172] (0xc0009c76b0) (0xc002986280) Stream removed, broadcasting: 1
I0226 13:08:14.934206       9 log.go:172] (0xc0009c76b0) (0xc001fe03c0) Stream removed, broadcasting: 3
I0226 13:08:14.934216       9 log.go:172] (0xc0009c76b0) (0xc00298c140) Stream removed, broadcasting: 5
Feb 26 13:08:14.934: INFO: Exec stderr: ""
Feb 26 13:08:14.934: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8qlz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 26 13:08:14.934: INFO: >>> kubeConfig: /root/.kube/config
I0226 13:08:15.045150       9 log.go:172] (0xc0009c7b80) (0xc0029865a0) Create stream
I0226 13:08:15.045311       9 log.go:172] (0xc0009c7b80) (0xc0029865a0) Stream added, broadcasting: 1
I0226 13:08:15.049299       9 log.go:172] (0xc0009c7b80) Reply frame received for 1
I0226 13:08:15.049354       9 log.go:172] (0xc0009c7b80) (0xc00298c1e0) Create stream
I0226 13:08:15.049366       9 log.go:172] (0xc0009c7b80) (0xc00298c1e0) Stream added, broadcasting: 3
I0226 13:08:15.050416       9 log.go:172] (0xc0009c7b80) Reply frame received for 3
I0226 13:08:15.050464       9 log.go:172] (0xc0009c7b80) (0xc001fe0460) Create stream
I0226 13:08:15.050476       9 log.go:172] (0xc0009c7b80) (0xc001fe0460) Stream added, broadcasting: 5
I0226 13:08:15.051511       9 log.go:172] (0xc0009c7b80) Reply frame received for 5
I0226 13:08:15.196633       9 log.go:172] (0xc0009c7b80) Data frame received for 3
I0226 13:08:15.196745       9 log.go:172] (0xc00298c1e0) (3) Data frame handling
I0226 13:08:15.196770       9 log.go:172] (0xc00298c1e0) (3) Data frame sent
I0226 13:08:15.319582       9 log.go:172] (0xc0009c7b80) (0xc001fe0460) Stream removed, broadcasting: 5
I0226 13:08:15.319719       9 log.go:172] (0xc0009c7b80) Data frame received for 1
I0226 13:08:15.319755       9 log.go:172] (0xc0009c7b80) (0xc00298c1e0) Stream removed, broadcasting: 3
I0226 13:08:15.319790       9 log.go:172] (0xc0029865a0) (1) Data frame handling
I0226 13:08:15.319815       9 log.go:172] (0xc0029865a0) (1) Data frame sent
I0226 13:08:15.319823       9 log.go:172] (0xc0009c7b80) (0xc0029865a0) Stream removed, broadcasting: 1
I0226 13:08:15.319995       9 log.go:172] (0xc0009c7b80) (0xc0029865a0) Stream removed, broadcasting: 1
I0226 13:08:15.320003       9 log.go:172] (0xc0009c7b80) (0xc00298c1e0) Stream removed, broadcasting: 3
I0226 13:08:15.320011       9 log.go:172] (0xc0009c7b80) (0xc001fe0460) Stream removed, broadcasting: 5
Feb 26 13:08:15.320: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:08:15.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-s8qlz" for this suite.
Feb 26 13:09:23.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:09:23.479: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-s8qlz, resource: bindings, ignored listing per whitelist
Feb 26 13:09:23.688: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-s8qlz deletion completed in 1m8.294668084s

• [SLOW TEST:110.074 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
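
What the exec calls above check: for a pod with hostNetwork unset (false), the kubelet writes a managed /etc/hosts into each container unless that container mounts its own file at /etc/hosts; for a hostNetwork=true pod the node's own /etc/hosts is used. A minimal sketch of the managed case (pod and container names are illustrative, not the test's manifests):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl wait --for=condition=Ready pod/etc-hosts-demo --timeout=60s
kubectl exec etc-hosts-demo -c busybox-1 -- cat /etc/hosts
# The kubelet-managed file starts with a "# Kubernetes-managed hosts file." header;
# a container that volume-mounts its own /etc/hosts, or a hostNetwork=true pod, will not show it.
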
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:09:23.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:10:47.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-wf8jf" for this suite.
Feb 26 13:10:53.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:10:53.671: INFO: namespace: e2e-tests-container-runtime-wf8jf, resource: bindings, ignored listing per whitelist
Feb 26 13:10:53.699: INFO: namespace e2e-tests-container-runtime-wf8jf deletion completed in 6.361486252s

• [SLOW TEST:90.011 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
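
The terminate-cmd-rpa / -rpof / -rpn containers above appear to correspond to restartPolicy Always, OnFailure and Never (an inference from the names, not stated in the log). The status fields the test asserts on can be inspected the same way for any pod; a sketch for the Never case with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
# Once the container has exited:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'                              # Failed
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'  # 0, since restartPolicy=Never
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'         # terminated, exitCode 1
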
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:10:53.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6b39973b-5899-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 13:10:54.081: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008" in namespace "e2e-tests-projected-m84bz" to be "success or failure"
Feb 26 13:10:54.196: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 114.230628ms
Feb 26 13:10:56.368: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286252267s
Feb 26 13:10:58.426: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344557705s
Feb 26 13:11:01.862: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.780353508s
Feb 26 13:11:03.941: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.85970249s
Feb 26 13:11:05.975: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.89317845s
Feb 26 13:11:08.680: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.598773005s
STEP: Saw pod success
Feb 26 13:11:08.681: INFO: Pod "pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:11:08.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 26 13:11:09.484: INFO: Waiting for pod pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008 to disappear
Feb 26 13:11:09.564: INFO: Pod pod-projected-secrets-6b3bf965-5899-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:11:09.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m84bz" for this suite.
Feb 26 13:11:15.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:11:15.677: INFO: namespace: e2e-tests-projected-m84bz, resource: bindings, ignored listing per whitelist
Feb 26 13:11:15.786: INFO: namespace e2e-tests-projected-m84bz deletion completed in 6.211872095s

• [SLOW TEST:22.086 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
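
The defaultMode assertion above applies to projected volumes the same way it does to plain secret volumes. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      defaultMode: 0400        # octal; projected files appear as -r--------
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-demo    # file listed with mode 0400 followed by the secret's content
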
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:11:15.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 26 13:11:16.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-lphbz'
Feb 26 13:11:16.120: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 26 13:11:16.120: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 26 13:11:20.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lphbz'
Feb 26 13:11:20.399: INFO: stderr: ""
Feb 26 13:11:20.400: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:11:20.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lphbz" for this suite.
Feb 26 13:11:26.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:11:26.524: INFO: namespace: e2e-tests-kubectl-lphbz, resource: bindings, ignored listing per whitelist
Feb 26 13:11:26.731: INFO: namespace e2e-tests-kubectl-lphbz deletion completed in 6.322871025s

• [SLOW TEST:10.945 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
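
The deprecation warning captured in the log points at the 1.13-era replacements for --generator=deployment/v1beta1; kubectl create produces an equivalent Deployment from the same image:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment,pods -l app=e2e-test-nginx-deployment   # pod is owned via the Deployment's ReplicaSet
kubectl delete deployment e2e-test-nginx-deployment
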
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:11:26.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 13:11:27.014: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 26 13:11:32.338: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 26 13:11:36.741: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 26 13:11:37.014: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-wbvb5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wbvb5/deployments/test-cleanup-deployment,UID:84b15d1e-5899-11ea-a994-fa163e34d433,ResourceVersion:22986032,Generation:1,CreationTimestamp:2020-02-26 13:11:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 26 13:11:37.023: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb 26 13:11:37.023: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 26 13:11:37.023: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-wbvb5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wbvb5/replicasets/test-cleanup-controller,UID:7edeb846-5899-11ea-a994-fa163e34d433,ResourceVersion:22986034,Generation:1,CreationTimestamp:2020-02-26 13:11:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 84b15d1e-5899-11ea-a994-fa163e34d433 0xc0028b9147 0xc0028b9148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 26 13:11:37.269: INFO: Pod "test-cleanup-controller-tzrrj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tzrrj,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-wbvb5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wbvb5/pods/test-cleanup-controller-tzrrj,UID:7ee2829d-5899-11ea-a994-fa163e34d433,ResourceVersion:22986030,Generation:0,CreationTimestamp:2020-02-26 13:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 7edeb846-5899-11ea-a994-fa163e34d433 0xc0027b6637 0xc0027b6638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fx9c4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fx9c4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-fx9c4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b66a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b66d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:11:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:11:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:11:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-26 13:11:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-26 13:11:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-26 13:11:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f95def4749aadcdf3592f201dce147b05ed24757dfe8150c569281c309d14fb2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:11:37.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-wbvb5" for this suite.
Feb 26 13:11:47.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:11:47.711: INFO: namespace: e2e-tests-deployment-wbvb5, resource: bindings, ignored listing per whitelist
Feb 26 13:11:47.776: INFO: namespace e2e-tests-deployment-wbvb5 deletion completed in 10.489727473s

• [SLOW TEST:21.045 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
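
The spec dump above shows RevisionHistoryLimit:*0, which is what makes the old ReplicaSet (test-cleanup-controller) eligible for deletion once the new rollout completes. A minimal sketch of the same behaviour with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  revisionHistoryLimit: 0        # keep no old ReplicaSets after a rollout
  replicas: 1
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl set image deployment/cleanup-demo nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/cleanup-demo
kubectl get rs -l app=cleanup-demo   # only the current ReplicaSet remains; the old one is garbage-collected
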
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:11:47.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 26 13:11:48.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008" in namespace "e2e-tests-downward-api-zdvdb" to be "success or failure"
Feb 26 13:11:48.285: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 208.109972ms
Feb 26 13:11:50.382: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304814164s
Feb 26 13:11:52.401: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323796092s
Feb 26 13:11:54.925: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847979027s
Feb 26 13:11:56.952: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.874158029s
Feb 26 13:11:58.972: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 10.894874207s
Feb 26 13:12:00.995: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.917708078s
STEP: Saw pod success
Feb 26 13:12:00.995: INFO: Pod "downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:12:01.011: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008 container client-container: 
STEP: delete the pod
Feb 26 13:12:01.119: INFO: Waiting for pod downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008 to disappear
Feb 26 13:12:01.127: INFO: Pod downwardapi-volume-8b6be862-5899-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:12:01.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zdvdb" for this suite.
Feb 26 13:12:07.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:12:07.329: INFO: namespace: e2e-tests-downward-api-zdvdb, resource: bindings, ignored listing per whitelist
Feb 26 13:12:07.347: INFO: namespace e2e-tests-downward-api-zdvdb deletion completed in 6.21496107s

• [SLOW TEST:19.570 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
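The Downward API test above projects the container's memory limit into a file inside the pod. A minimal sketch of such a pod, built with the k8s.io/api types the e2e suite itself uses: the container name client-container and the busybox image come from the log, while the mount path, file name, shell command, and 64Mi limit are assumptions for illustration.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/podinfo/mem_limit"}, // assumed command
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("64Mi"),
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "mem_limit",
                            // Project the container's memory limit into the file.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                            },
                        }},
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------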
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:12:07.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 26 13:12:07.615: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 26 13:12:07.651: INFO: Waiting for terminating namespaces to be deleted...
Feb 26 13:12:07.656: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 26 13:12:07.672: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 26 13:12:07.672: INFO: 	Container coredns ready: true, restart count 0
Feb 26 13:12:07.672: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 26 13:12:07.672: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 26 13:12:07.672: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:12:07.672: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 26 13:12:07.672: INFO: 	Container weave ready: true, restart count 0
Feb 26 13:12:07.672: INFO: 	Container weave-npc ready: true, restart count 0
Feb 26 13:12:07.672: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 26 13:12:07.672: INFO: 	Container coredns ready: true, restart count 0
Feb 26 13:12:07.672: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:12:07.672: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 26 13:12:07.672: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9e5a704b-5899-11ea-8134-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9e5a704b-5899-11ea-8134-0242ac110008 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9e5a704b-5899-11ea-8134-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:12:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-p6snh" for this suite.
Feb 26 13:12:48.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:12:48.631: INFO: namespace: e2e-tests-sched-pred-p6snh, resource: bindings, ignored listing per whitelist
Feb 26 13:12:48.723: INFO: namespace e2e-tests-sched-pred-p6snh deletion completed in 16.462033667s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:41.376 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
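The scheduling test above labels a node and then relaunches a pod whose nodeSelector requires that label. A small sketch of the pod side of that contract; the label key and value below only mimic the generated kubernetes.io/e2e-... key and the value 42 seen in the log, and the image is a stand-in from the pod dump.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The pod only schedules onto nodes carrying the matching label
    // (the e2e test applies such a label to a chosen node first).
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"}, // illustrative name
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{
                "kubernetes.io/e2e-example": "42", // hypothetical key; the real test generates a unique one
            },
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------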
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:12:48.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 13:12:48.989: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:12:50.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-dz4jx" for this suite.
Feb 26 13:12:56.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:12:56.367: INFO: namespace: e2e-tests-custom-resource-definition-dz4jx, resource: bindings, ignored listing per whitelist
Feb 26 13:12:56.592: INFO: namespace e2e-tests-custom-resource-definition-dz4jx deletion completed in 6.346421768s

• [SLOW TEST:7.868 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
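The apimachinery test above simply creates and deletes a CustomResourceDefinition. Below is a hedged sketch of a comparable CRD object using the apiextensions v1beta1 types that match this v1.13-era suite; the group, kind, and plural names are invented for illustration and are not what the test registers.

package main

import (
    "encoding/json"
    "fmt"

    apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A namespaced CRD; creating it registers a new resource under
    // /apis/example.com/v1, and deleting it removes that resource again.
    crd := apiextv1beta1.CustomResourceDefinition{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1beta1", Kind: "CustomResourceDefinition"},
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // must be <plural>.<group>
        Spec: apiextv1beta1.CustomResourceDefinitionSpec{
            Group:   "example.com",
            Version: "v1",
            Scope:   apiextv1beta1.NamespaceScoped,
            Names: apiextv1beta1.CustomResourceDefinitionNames{
                Plural:   "foos",
                Singular: "foo",
                Kind:     "Foo",
                ListKind: "FooList",
            },
        },
    }

    out, _ := json.MarshalIndent(crd, "", "  ")
    fmt.Println(string(out))
}
------------------------------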
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:12:56.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 26 13:12:57.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-22spz run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 26 13:13:13.015: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0226 13:13:11.582067    4522 log.go:172] (0xc00087a210) (0xc000882320) Create stream\nI0226 13:13:11.582200    4522 log.go:172] (0xc00087a210) (0xc000882320) Stream added, broadcasting: 1\nI0226 13:13:11.591066    4522 log.go:172] (0xc00087a210) Reply frame received for 1\nI0226 13:13:11.591136    4522 log.go:172] (0xc00087a210) (0xc000708780) Create stream\nI0226 13:13:11.591153    4522 log.go:172] (0xc00087a210) (0xc000708780) Stream added, broadcasting: 3\nI0226 13:13:11.595304    4522 log.go:172] (0xc00087a210) Reply frame received for 3\nI0226 13:13:11.595355    4522 log.go:172] (0xc00087a210) (0xc0008823c0) Create stream\nI0226 13:13:11.595372    4522 log.go:172] (0xc00087a210) (0xc0008823c0) Stream added, broadcasting: 5\nI0226 13:13:11.597753    4522 log.go:172] (0xc00087a210) Reply frame received for 5\nI0226 13:13:11.597862    4522 log.go:172] (0xc00087a210) (0xc0005c0dc0) Create stream\nI0226 13:13:11.597957    4522 log.go:172] (0xc00087a210) (0xc0005c0dc0) Stream added, broadcasting: 7\nI0226 13:13:11.600633    4522 log.go:172] (0xc00087a210) Reply frame received for 7\nI0226 13:13:11.601216    4522 log.go:172] (0xc000708780) (3) Writing data frame\nI0226 13:13:11.601458    4522 log.go:172] (0xc000708780) (3) Writing data frame\nI0226 13:13:11.607761    4522 log.go:172] (0xc00087a210) Data frame received for 5\nI0226 13:13:11.607779    4522 log.go:172] (0xc0008823c0) (5) Data frame handling\nI0226 13:13:11.607804    4522 log.go:172] (0xc0008823c0) (5) Data frame sent\nI0226 13:13:11.617252    4522 log.go:172] (0xc00087a210) Data frame received for 5\nI0226 13:13:11.617270    4522 log.go:172] (0xc0008823c0) (5) Data frame handling\nI0226 13:13:11.617281    4522 log.go:172] (0xc0008823c0) (5) Data frame sent\nI0226 13:13:12.957053    4522 log.go:172] (0xc00087a210) Data frame received for 1\nI0226 13:13:12.957142    4522 log.go:172] (0xc00087a210) (0xc0008823c0) Stream removed, broadcasting: 5\nI0226 13:13:12.957179    4522 log.go:172] (0xc000882320) (1) Data frame handling\nI0226 13:13:12.957203    4522 log.go:172] (0xc00087a210) (0xc000708780) Stream removed, broadcasting: 3\nI0226 13:13:12.957228    4522 log.go:172] (0xc000882320) (1) Data frame sent\nI0226 13:13:12.957247    4522 log.go:172] (0xc00087a210) (0xc000882320) Stream removed, broadcasting: 1\nI0226 13:13:12.957665    4522 log.go:172] (0xc00087a210) (0xc0005c0dc0) Stream removed, broadcasting: 7\nI0226 13:13:12.957707    4522 log.go:172] (0xc00087a210) Go away received\nI0226 13:13:12.957749    4522 log.go:172] (0xc00087a210) (0xc000882320) Stream removed, broadcasting: 1\nI0226 13:13:12.957770    4522 log.go:172] (0xc00087a210) (0xc000708780) Stream removed, broadcasting: 3\nI0226 13:13:12.957779    4522 log.go:172] (0xc00087a210) (0xc0008823c0) Stream removed, broadcasting: 5\nI0226 13:13:12.957789    4522 log.go:172] (0xc00087a210) (0xc0005c0dc0) Stream removed, broadcasting: 7\n"
Feb 26 13:13:13.016: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:13:15.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-22spz" for this suite.
Feb 26 13:13:21.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:13:21.611: INFO: namespace: e2e-tests-kubectl-22spz, resource: bindings, ignored listing per whitelist
Feb 26 13:13:21.691: INFO: namespace e2e-tests-kubectl-22spz deletion completed in 6.651799673s

• [SLOW TEST:25.097 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:13:21.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 26 13:13:21.939: INFO: Creating ReplicaSet my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008
Feb 26 13:13:22.160: INFO: Pod name my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008: Found 0 pods out of 1
Feb 26 13:13:27.188: INFO: Pod name my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008: Found 1 pods out of 1
Feb 26 13:13:27.188: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008" is running
Feb 26 13:13:35.206: INFO: Pod "my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008-fsc9m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:13:22 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:13:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:13:22 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:13:22 +0000 UTC Reason: Message:}])
Feb 26 13:13:35.206: INFO: Trying to dial the pod
Feb 26 13:13:40.282: INFO: Controller my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008: Got expected result from replica 1 [my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008-fsc9m]: "my-hostname-basic-c361579d-5899-11ea-8134-0242ac110008-fsc9m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:13:40.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-m7kdz" for this suite.
Feb 26 13:13:48.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:13:48.406: INFO: namespace: e2e-tests-replicaset-m7kdz, resource: bindings, ignored listing per whitelist
Feb 26 13:13:48.491: INFO: namespace e2e-tests-replicaset-m7kdz deletion completed in 8.197624277s

• [SLOW TEST:26.800 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
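The ReplicaSet test above runs one replica of a public image and then dials the pod to confirm it answers with its own hostname. A sketch of the controller object; the names are modelled on the my-hostname-basic prefix in the log, and the nginx image is only a stand-in (the real test uses a serve-hostname style image).

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "my-hostname-basic"}

    rs := appsv1.ReplicaSet{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "ReplicaSet"},
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        // The conformance test uses an image that serves its own pod
                        // name over HTTP; nginx is used here only as an illustration.
                        Name:  "my-hostname-basic",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }

    out, _ := json.MarshalIndent(rs, "", "  ")
    fmt.Println(string(out))
}
------------------------------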
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:13:48.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d43056cc-5899-11ea-8134-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 26 13:13:50.439: INFO: Waiting up to 5m0s for pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008" in namespace "e2e-tests-secrets-8v4nn" to be "success or failure"
Feb 26 13:13:50.637: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 198.373477ms
Feb 26 13:13:53.334: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894492545s
Feb 26 13:13:55.348: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.908977303s
Feb 26 13:13:57.800: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.360616724s
Feb 26 13:13:59.814: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.374565795s
Feb 26 13:14:01.960: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.520563406s
Feb 26 13:14:03.977: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.537649408s
STEP: Saw pod success
Feb 26 13:14:03.977: INFO: Pod "pod-secrets-d457b533-5899-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:14:03.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d457b533-5899-11ea-8134-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 26 13:14:04.181: INFO: Waiting for pod pod-secrets-d457b533-5899-11ea-8134-0242ac110008 to disappear
Feb 26 13:14:04.315: INFO: Pod pod-secrets-d457b533-5899-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:14:04.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8v4nn" for this suite.
Feb 26 13:14:12.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:14:12.472: INFO: namespace: e2e-tests-secrets-8v4nn, resource: bindings, ignored listing per whitelist
Feb 26 13:14:12.678: INFO: namespace e2e-tests-secrets-8v4nn deletion completed in 8.347451179s

• [SLOW TEST:24.187 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
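The Secrets test above mounts a secret as a volume and reads it back from the pod. A sketch of the two objects involved; the container name secret-volume-test matches the log and DefaultMode 0644 is the 420 printed in the pod dumps, while the secret key, value, mount path, and command are assumptions.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    secret := corev1.Secret{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
        ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},          // illustrative name
        Data:       map[string][]byte{"data-1": []byte("value-1")}, // assumed key and value
    }

    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "secret-volume",
                    MountPath: "/etc/secret-volume",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  secret.Name,
                        DefaultMode: int32Ptr(0644), // 0644 octal == 420 decimal
                    },
                },
            }},
        },
    }

    for _, obj := range []interface{}{secret, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
------------------------------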
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:14:12.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008
Feb 26 13:14:13.288: INFO: Pod name my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008: Found 0 pods out of 1
Feb 26 13:14:18.929: INFO: Pod name my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008: Found 1 pods out of 1
Feb 26 13:14:18.930: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008" are running
Feb 26 13:14:23.507: INFO: Pod "my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008-nqf9v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:14:13 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:14:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:14:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-26 13:14:13 +0000 UTC Reason: Message:}])
Feb 26 13:14:23.508: INFO: Trying to dial the pod
Feb 26 13:14:29.570: INFO: Controller my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008: Got expected result from replica 1 [my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008-nqf9v]: "my-hostname-basic-e1f3a51e-5899-11ea-8134-0242ac110008-nqf9v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:14:29.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-xl2sc" for this suite.
Feb 26 13:14:35.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:14:36.002: INFO: namespace: e2e-tests-replication-controller-xl2sc, resource: bindings, ignored listing per whitelist
Feb 26 13:14:36.012: INFO: namespace e2e-tests-replication-controller-xl2sc deletion completed in 6.434620697s

• [SLOW TEST:23.332 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
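The ReplicationController test above mirrors the earlier ReplicaSet test with the older controller type. The main API difference worth noting is the selector shape, sketched below; the names and the nginx stand-in image are illustrative, not the test's fixture.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "my-hostname-basic"}

    rc := corev1.ReplicationController{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: int32Ptr(1),
            Selector: labels, // RC selectors are plain label maps, unlike ReplicaSet's LabelSelector
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "docker.io/library/nginx:1.14-alpine", // stand-in; the test uses a serve-hostname image
                    }},
                },
            },
        },
    }

    out, _ := json.MarshalIndent(rc, "", "  ")
    fmt.Println(string(out))
}
------------------------------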
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:14:36.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-86cxz
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-86cxz to expose endpoints map[]
Feb 26 13:14:36.290: INFO: Get endpoints failed (12.645592ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 26 13:14:37.999: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-86cxz exposes endpoints map[] (1.720801814s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-86cxz
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-86cxz to expose endpoints map[pod1:[100]]
Feb 26 13:14:43.058: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.750173738s elapsed, will retry)
Feb 26 13:14:48.408: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-86cxz exposes endpoints map[pod1:[100]] (10.09978901s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-86cxz
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-86cxz to expose endpoints map[pod1:[100] pod2:[101]]
Feb 26 13:14:53.085: INFO: Unexpected endpoints: found map[f0e44da2-5899-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.666328606s elapsed, will retry)
Feb 26 13:15:00.145: INFO: Unexpected endpoints: found map[f0e44da2-5899-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (11.726367133s elapsed, will retry)
Feb 26 13:15:01.208: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-86cxz exposes endpoints map[pod1:[100] pod2:[101]] (12.789214769s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-86cxz
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-86cxz to expose endpoints map[pod2:[101]]
Feb 26 13:15:02.941: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-86cxz exposes endpoints map[pod2:[101]] (1.723503072s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-86cxz
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-86cxz to expose endpoints map[]
Feb 26 13:15:03.015: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-86cxz exposes endpoints map[] (38.620605ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:15:03.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-86cxz" for this suite.
Feb 26 13:15:27.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:15:27.512: INFO: namespace: e2e-tests-services-86cxz, resource: bindings, ignored listing per whitelist
Feb 26 13:15:27.805: INFO: namespace e2e-tests-services-86cxz deletion completed in 24.483939616s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:51.793 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
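The Services test above adds and removes pods behind a two-port service and watches the endpoints map change (pod1:[100], pod2:[101] in the log). A sketch of a matching multi-port Service; the service name and the target ports 100 and 101 come from the log, while the port names, front-end ports, and selector labels are assumptions.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Two named ports on one Service; endpoints for each port are populated
    // from whichever ready pods match the selector and expose that target port.
    svc := corev1.Service{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
        ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"name": "multi-endpoint-test"}, // assumed selector
            Ports: []corev1.ServicePort{
                {Name: "portname1", Protocol: corev1.ProtocolTCP, Port: 80, TargetPort: intstr.FromInt(100)},
                {Name: "portname2", Protocol: corev1.ProtocolTCP, Port: 81, TargetPort: intstr.FromInt(101)},
            },
        },
    }

    out, _ := json.MarshalIndent(svc, "", "  ")
    fmt.Println(string(out))
}
------------------------------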
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:15:27.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 26 13:15:28.955: INFO: created pod pod-service-account-defaultsa
Feb 26 13:15:28.955: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 26 13:15:29.090: INFO: created pod pod-service-account-mountsa
Feb 26 13:15:29.090: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 26 13:15:29.117: INFO: created pod pod-service-account-nomountsa
Feb 26 13:15:29.117: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 26 13:15:29.286: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 26 13:15:29.287: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 26 13:15:29.365: INFO: created pod pod-service-account-mountsa-mountspec
Feb 26 13:15:29.365: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 26 13:15:29.506: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 26 13:15:29.506: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 26 13:15:29.574: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 26 13:15:29.574: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 26 13:15:29.602: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 26 13:15:29.602: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 26 13:15:29.744: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 26 13:15:29.744: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:15:29.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-xdwrl" for this suite.
Feb 26 13:16:00.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:16:00.896: INFO: namespace: e2e-tests-svcaccounts-xdwrl, resource: bindings, ignored listing per whitelist
Feb 26 13:16:00.944: INFO: namespace e2e-tests-svcaccounts-xdwrl deletion completed in 31.145813408s

• [SLOW TEST:33.137 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
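The ServiceAccounts test above creates pods with every combination of ServiceAccount-level and pod-level automount settings and checks whether a token volume shows up. A sketch of the two opt-out knobs; the ServiceAccount name is invented, the pod name is taken from the log, and the container is illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    // Opting out on the ServiceAccount: pods using it get no token volume
    // unless their own spec overrides the choice.
    sa := corev1.ServiceAccount{
        TypeMeta:                     metav1.TypeMeta{APIVersion: "v1", Kind: "ServiceAccount"},
        ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"}, // illustrative name
        AutomountServiceAccountToken: boolPtr(false),
    }

    // Opting out on the Pod: the pod-level field takes precedence over the
    // ServiceAccount's setting.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-defaultsa-nomountspec"},
        Spec: corev1.PodSpec{
            ServiceAccountName:           "default",
            AutomountServiceAccountToken: boolPtr(false),
            Containers: []corev1.Container{{
                Name:  "token-test",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }

    for _, obj := range []interface{}{sa, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
------------------------------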
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:16:00.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 26 13:16:01.444: INFO: Waiting up to 5m0s for pod "pod-226ed83c-589a-11ea-8134-0242ac110008" in namespace "e2e-tests-emptydir-6287n" to be "success or failure"
Feb 26 13:16:01.608: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 163.247354ms
Feb 26 13:16:04.009: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564481922s
Feb 26 13:16:06.024: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579304852s
Feb 26 13:16:08.065: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.621038106s
Feb 26 13:16:10.084: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63917144s
Feb 26 13:16:12.115: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.67102283s
STEP: Saw pod success
Feb 26 13:16:12.116: INFO: Pod "pod-226ed83c-589a-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:16:12.153: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-226ed83c-589a-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 13:16:12.349: INFO: Waiting for pod pod-226ed83c-589a-11ea-8134-0242ac110008 to disappear
Feb 26 13:16:12.366: INFO: Pod pod-226ed83c-589a-11ea-8134-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:16:12.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6287n" for this suite.
Feb 26 13:16:18.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:16:18.581: INFO: namespace: e2e-tests-emptydir-6287n, resource: bindings, ignored listing per whitelist
Feb 26 13:16:18.733: INFO: namespace e2e-tests-emptydir-6287n deletion completed in 6.341094662s

• [SLOW TEST:17.789 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
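The EmptyDir test above writes a 0644 file as root on the node's default medium and verifies its content and permissions. A sketch of such a pod; the test-container name matches the log, while the volume name, paths, and shell command are assumptions.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // An emptyDir on the node's default medium; the container writes a file
    // with mode 0644 as root and reads it back, mirroring what the test checks.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c",
                    "echo hello > /test-volume/data && chmod 0644 /test-volume/data && ls -l /test-volume/data && cat /test-volume/data"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------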
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:16:18.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 26 13:16:32.028: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:16:33.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-dmrbw" for this suite.
Feb 26 13:16:55.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:16:55.948: INFO: namespace: e2e-tests-replicaset-dmrbw, resource: bindings, ignored listing per whitelist
Feb 26 13:16:56.044: INFO: namespace e2e-tests-replicaset-dmrbw deletion completed in 22.971997411s

• [SLOW TEST:37.310 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:16:56.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 26 13:16:56.315: INFO: Waiting up to 5m0s for pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008" in namespace "e2e-tests-containers-959fm" to be "success or failure"
Feb 26 13:16:56.341: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.230588ms
Feb 26 13:16:58.507: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191249078s
Feb 26 13:17:00.524: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208560977s
Feb 26 13:17:02.552: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23665431s
Feb 26 13:17:05.785: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.469135645s
Feb 26 13:17:10.397: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.081596573s
Feb 26 13:17:12.427: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.111107696s
Feb 26 13:17:14.476: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.160020824s
STEP: Saw pod success
Feb 26 13:17:14.476: INFO: Pod "client-containers-4315fa7f-589a-11ea-8134-0242ac110008" satisfied condition "success or failure"
Feb 26 13:17:14.493: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4315fa7f-589a-11ea-8134-0242ac110008 container test-container: 
STEP: delete the pod
Feb 26 13:17:15.053: INFO: Waiting for pod client-containers-4315fa7f-589a-11ea-8134-0242ac110008 to disappear
Feb 26 13:17:15.141: INFO: Pod client-containers-4315fa7f-589a-11ea-8134-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:17:15.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-959fm" for this suite.
Feb 26 13:17:23.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:17:23.279: INFO: namespace: e2e-tests-containers-959fm, resource: bindings, ignored listing per whitelist
Feb 26 13:17:23.529: INFO: namespace e2e-tests-containers-959fm deletion completed in 8.375809462s

• [SLOW TEST:27.484 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
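The Docker Containers test above overrides the image's default arguments, which in the pod API means setting args while leaving command unset so the image's entrypoint is preserved. A sketch of that pattern; the pod name and the args themselves are invented, and the busybox image comes from the log.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Setting only Args leaves the image's ENTRYPOINT in place and replaces
    // its default CMD, which is what "override the image's default arguments"
    // amounts to in the pod API.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29",
                Args:  []string{"echo", "override", "arguments"}, // illustrative args, not the test's exact fixture
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------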
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:17:23.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 26 13:17:34.331: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:17:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-fblpm" for this suite.
Feb 26 13:18:05.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:18:05.997: INFO: namespace: e2e-tests-namespaces-fblpm, resource: bindings, ignored listing per whitelist
Feb 26 13:18:06.157: INFO: namespace e2e-tests-namespaces-fblpm deletion completed in 6.289211886s
STEP: Destroying namespace "e2e-tests-nsdeletetest-mln84" for this suite.
Feb 26 13:18:06.161: INFO: Namespace e2e-tests-nsdeletetest-mln84 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4f7gq" for this suite.
Feb 26 13:18:12.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:18:13.777: INFO: namespace: e2e-tests-nsdeletetest-4f7gq, resource: bindings, ignored listing per whitelist
Feb 26 13:18:13.804: INFO: namespace e2e-tests-nsdeletetest-4f7gq deletion completed in 7.642783688s

• [SLOW TEST:50.274 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
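The Namespaces test above creates a pod inside a throwaway namespace, deletes the namespace, and verifies the pod is gone once the namespace controller finishes. A sketch of the two objects involved; both names and the image choice are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Deleting the Namespace object is all it takes: the namespace controller
    // then removes every pod (and other namespaced object) inside it before
    // the namespace itself disappears.
    ns := corev1.Namespace{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Namespace"},
        ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest"},
    }

    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Namespace: ns.Name},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "nginx",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }

    for _, obj := range []interface{}{ns, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
------------------------------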
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 26 13:18:13.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5t2t
STEP: Creating a pod to test atomic-volume-subpath
Feb 26 13:18:14.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5t2t" in namespace "e2e-tests-subpath-2kp6v" to be "success or failure"
Feb 26 13:18:14.230: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 68.069101ms
Feb 26 13:18:16.585: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423504378s
Feb 26 13:18:18.636: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474192228s
Feb 26 13:18:20.722: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560683617s
Feb 26 13:18:24.229: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067253265s
Feb 26 13:18:26.258: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.095876591s
Feb 26 13:18:28.282: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 14.119952672s
Feb 26 13:18:30.295: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 16.133633991s
Feb 26 13:18:32.612: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 18.450665611s
Feb 26 13:18:34.634: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Pending", Reason="", readiness=false. Elapsed: 20.471820651s
Feb 26 13:18:36.649: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 22.487226538s
Feb 26 13:18:38.682: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 24.520674277s
Feb 26 13:18:40.704: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 26.541750293s
Feb 26 13:18:42.720: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 28.558690265s
Feb 26 13:18:44.737: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 30.575570915s
Feb 26 13:18:46.773: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 32.610989003s
Feb 26 13:18:48.788: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 34.62606584s
Feb 26 13:18:50.806: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 36.644085589s
Feb 26 13:18:52.827: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Running", Reason="", readiness=false. Elapsed: 38.665072735s
Feb 26 13:18:54.872: INFO: Pod "pod-subpath-test-downwardapi-5t2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.709749657s
STEP: Saw pod success
Feb 26 13:18:54.872: INFO: Pod "pod-subpath-test-downwardapi-5t2t" satisfied condition "success or failure"
Feb 26 13:18:54.891: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-5t2t container test-container-subpath-downwardapi-5t2t: 
STEP: delete the pod
Feb 26 13:18:55.125: INFO: Waiting for pod pod-subpath-test-downwardapi-5t2t to disappear
Feb 26 13:18:55.144: INFO: Pod pod-subpath-test-downwardapi-5t2t no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5t2t
Feb 26 13:18:55.144: INFO: Deleting pod "pod-subpath-test-downwardapi-5t2t" in namespace "e2e-tests-subpath-2kp6v"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 26 13:18:55.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-2kp6v" for this suite.
Feb 26 13:19:02.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 26 13:19:02.267: INFO: namespace: e2e-tests-subpath-2kp6v, resource: bindings, ignored listing per whitelist
Feb 26 13:19:02.581: INFO: namespace e2e-tests-subpath-2kp6v deletion completed in 7.41914877s

• [SLOW TEST:48.776 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
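The Subpath test above mounts a single file out of a downwardAPI volume using subPath. A sketch of that volume/mount pairing; the pod name, paths, projected field, and command are assumptions in the spirit of the test, not its actual fixture.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A downwardAPI volume mounted via subPath: only the single projected file
    // (here the pod's own name) appears at the mount point instead of the
    // whole volume directory.
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /subpath/podname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "downward",
                    MountPath: "/subpath/podname",
                    SubPath:   "podname", // mount just this one file from the volume
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "downward",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
------------------------------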
SSSSSS
Feb 26 13:19:02.582: INFO: Running AfterSuite actions on all nodes
Feb 26 13:19:02.582: INFO: Running AfterSuite actions on node 1
Feb 26 13:19:02.582: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute poststart exec hook properly [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175

Ran 199 of 2164 Specs in 9106.272 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (9106.82s)
FAIL