I0212 10:47:14.039831 8 e2e.go:224] Starting e2e run "070a8837-4d85-11ea-b4b9-0242ac110005" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1581504433 - Will randomize all specs Will run 201 of 2164 specs Feb 12 10:47:14.369: INFO: >>> kubeConfig: /root/.kube/config Feb 12 10:47:14.372: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 12 10:47:14.389: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 12 10:47:14.416: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 12 10:47:14.416: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 12 10:47:14.416: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 12 10:47:14.424: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 12 10:47:14.424: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Feb 12 10:47:14.424: INFO: e2e test version: v1.13.12 Feb 12 10:47:14.426: INFO: kube-apiserver version: v1.13.8 SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:47:14.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api Feb 12 10:47:14.635: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 12 10:47:25.272: INFO: Successfully updated pod "labelsupdate07e12c6a-4d85-11ea-b4b9-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:47:29.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sqzcg" for this suite. 
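The pod exercised by this Downward API test projects its own labels into a file through a downwardAPI volume; when the labels are patched, the kubelet rewrites the projected file, which is the update asserted above. A minimal pod of that shape (the name, image and label values here are illustrative, not the generated test fixture) looks roughly like:

  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo          # illustrative name
    labels:
      app: demo
  spec:
    containers:
    - name: client
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels

Relabeling such a pod, for example with "kubectl label pod labels-demo app=changed --overwrite", should show up in /etc/podinfo/labels after a short delay.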
Feb 12 10:47:53.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:47:53.660: INFO: namespace: e2e-tests-downward-api-sqzcg, resource: bindings, ignored listing per whitelist Feb 12 10:47:53.697: INFO: namespace e2e-tests-downward-api-sqzcg deletion completed in 24.23095357s • [SLOW TEST:39.271 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:47:53.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 12 10:47:54.110: INFO: Waiting up to 5m0s for pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-nf5jw" to be "success or failure" Feb 12 10:47:54.154: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.406493ms Feb 12 10:47:56.184: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073234393s Feb 12 10:47:58.203: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092452684s Feb 12 10:48:00.223: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112865991s Feb 12 10:48:02.254: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143288625s Feb 12 10:48:04.278: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.167374448s STEP: Saw pod success Feb 12 10:48:04.278: INFO: Pod "pod-1f5339f7-4d85-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 10:48:04.285: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1f5339f7-4d85-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 10:48:04.477: INFO: Waiting for pod pod-1f5339f7-4d85-11ea-b4b9-0242ac110005 to disappear Feb 12 10:48:04.496: INFO: Pod pod-1f5339f7-4d85-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:48:04.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nf5jw" for this suite. 
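The emptyDir case above writes a file with mode 0644 into a default-medium emptyDir volume as a non-root user and checks the result. The real fixture is generated by the test framework; a sketch of the same shape (the busybox image and UID 1000 are assumptions) is:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo   # illustrative name
  spec:
    securityContext:
      runAsUser: 1000          # non-root
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "touch /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir: {}             # default medium, i.e. node-local disk

Like the test pod, this one runs to completion, so its phase moves from Pending to Succeeded, which is the "success or failure" condition polled in the log.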
Feb 12 10:48:10.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:48:11.039: INFO: namespace: e2e-tests-emptydir-nf5jw, resource: bindings, ignored listing per whitelist Feb 12 10:48:11.145: INFO: namespace e2e-tests-emptydir-nf5jw deletion completed in 6.612540498s • [SLOW TEST:17.448 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:48:11.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-2a20723a-4d85-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume secrets Feb 12 10:48:12.158: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-nwm6v" to be "success or failure" Feb 12 10:48:12.286: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 127.116406ms Feb 12 10:48:14.479: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320633551s Feb 12 10:48:16.500: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341249486s Feb 12 10:48:18.546: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.387805078s Feb 12 10:48:20.583: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.425050613s Feb 12 10:48:22.602: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.443092572s STEP: Saw pod success Feb 12 10:48:22.602: INFO: Pod "pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 10:48:22.607: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 12 10:48:22.683: INFO: Waiting for pod pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005 to disappear Feb 12 10:48:23.901: INFO: Pod pod-projected-secrets-2a24daf9-4d85-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:48:23.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nwm6v" for this suite. Feb 12 10:48:30.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:48:31.008: INFO: namespace: e2e-tests-projected-nwm6v, resource: bindings, ignored listing per whitelist Feb 12 10:48:31.058: INFO: namespace e2e-tests-projected-nwm6v deletion completed in 6.347012623s • [SLOW TEST:19.912 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:48:31.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:48:43.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-jhft6" for this suite. 
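The Kubelet case that just finished schedules a busybox container with a read-only root filesystem and verifies that writes to it fail. A sketch of such a pod (the command and name are assumptions, not the actual fixture):

  apiVersion: v1
  kind: Pod
  metadata:
    name: readonly-rootfs-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo test > /file; sleep 240"]
      securityContext:
        readOnlyRootFilesystem: true

With that securityContext the redirect fails with a read-only file system error instead of creating /file.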
Feb 12 10:49:27.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:49:27.651: INFO: namespace: e2e-tests-kubelet-test-jhft6, resource: bindings, ignored listing per whitelist Feb 12 10:49:27.668: INFO: namespace e2e-tests-kubelet-test-jhft6 deletion completed in 44.23222049s • [SLOW TEST:56.610 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:49:27.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 12 10:49:27.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kg28z' Feb 12 10:49:29.951: INFO: stderr: "" Feb 12 10:49:29.951: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 12 10:49:29.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kg28z' Feb 12 10:49:32.567: INFO: stderr: "" Feb 12 10:49:32.567: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:49:32.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kg28z" for this suite. 
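The --generator=run-pod/v1 flag in the kubectl run command above was how kubectl 1.13 was told to create a bare Pod; generators have since been removed, and on current kubectl releases, where kubectl run only ever creates a Pod, the rough equivalent of the two logged commands is:

  kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine \
      --restart=Never --namespace=e2e-tests-kubectl-kg28z
  kubectl delete pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kg28z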
Feb 12 10:49:38.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:49:38.966: INFO: namespace: e2e-tests-kubectl-kg28z, resource: bindings, ignored listing per whitelist Feb 12 10:49:39.038: INFO: namespace e2e-tests-kubectl-kg28z deletion completed in 6.359462156s • [SLOW TEST:11.369 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:49:39.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-vhpz STEP: Creating a pod to test atomic-volume-subpath Feb 12 10:49:39.288: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vhpz" in namespace "e2e-tests-subpath-k4m2v" to be "success or failure" Feb 12 10:49:39.375: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 86.674373ms Feb 12 10:49:41.409: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120824224s Feb 12 10:49:43.425: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13707231s Feb 12 10:49:45.964: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675730728s Feb 12 10:49:47.983: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.694995921s Feb 12 10:49:50.021: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.732858845s Feb 12 10:49:52.054: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.766468509s Feb 12 10:49:54.070: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.782141202s Feb 12 10:49:56.084: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 16.796227781s Feb 12 10:49:58.098: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 18.809747198s Feb 12 10:50:00.115: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.827576487s Feb 12 10:50:02.135: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 22.846842859s Feb 12 10:50:04.146: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 24.857921301s Feb 12 10:50:06.182: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 26.894078053s Feb 12 10:50:08.197: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 28.908939739s Feb 12 10:50:10.221: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 30.933139227s Feb 12 10:50:12.239: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 32.95146756s Feb 12 10:50:14.253: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Running", Reason="", readiness=false. Elapsed: 34.96464986s Feb 12 10:50:16.277: INFO: Pod "pod-subpath-test-projected-vhpz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.989281047s STEP: Saw pod success Feb 12 10:50:16.277: INFO: Pod "pod-subpath-test-projected-vhpz" satisfied condition "success or failure" Feb 12 10:50:16.284: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-vhpz container test-container-subpath-projected-vhpz: STEP: delete the pod Feb 12 10:50:16.695: INFO: Waiting for pod pod-subpath-test-projected-vhpz to disappear Feb 12 10:50:16.895: INFO: Pod pod-subpath-test-projected-vhpz no longer exists STEP: Deleting pod pod-subpath-test-projected-vhpz Feb 12 10:50:16.895: INFO: Deleting pod "pod-subpath-test-projected-vhpz" in namespace "e2e-tests-subpath-k4m2v" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:50:16.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-k4m2v" for this suite. 
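The atomic-writer subpath pod above mounts a single entry of a projected volume via subPath and reads it back. A simplified pod of that shape (the configMap name and key are assumptions) is roughly:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-subpath-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /mnt/file.txt"]
      volumeMounts:
      - name: projected-vol
        mountPath: /mnt/file.txt
        subPath: file.txt          # mount just one entry of the volume
    volumes:
    - name: projected-vol
      projected:
        sources:
        - configMap:
            name: demo-config      # assumed to contain a key named file.txt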
Feb 12 10:50:22.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:50:23.062: INFO: namespace: e2e-tests-subpath-k4m2v, resource: bindings, ignored listing per whitelist Feb 12 10:50:23.087: INFO: namespace e2e-tests-subpath-k4m2v deletion completed in 6.177260579s • [SLOW TEST:44.049 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:50:23.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wsb2n Feb 12 10:50:35.328: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wsb2n STEP: checking the pod's current state and verifying that restartCount is present Feb 12 10:50:35.334: INFO: Initial restart count of pod liveness-http is 0 Feb 12 10:50:49.763: INFO: Restart count of pod e2e-tests-container-probe-wsb2n/liveness-http is now 1 (14.428914807s elapsed) Feb 12 10:51:07.947: INFO: Restart count of pod e2e-tests-container-probe-wsb2n/liveness-http is now 2 (32.612806967s elapsed) Feb 12 10:51:28.455: INFO: Restart count of pod e2e-tests-container-probe-wsb2n/liveness-http is now 3 (53.121279943s elapsed) Feb 12 10:51:48.848: INFO: Restart count of pod e2e-tests-container-probe-wsb2n/liveness-http is now 4 (1m13.513634382s elapsed) Feb 12 10:52:51.486: INFO: Restart count of pod e2e-tests-container-probe-wsb2n/liveness-http is now 5 (2m16.152222376s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:52:51.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wsb2n" for this suite. 
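The liveness-http pod above carries an HTTP liveness probe against an endpoint that starts failing shortly after startup, so the kubelet keeps killing and restarting the container and the restart count climbs exactly as logged. The classic liveness example from the Kubernetes docs is a close analogue (the image path varies by registry era):

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http-demo     # illustrative name
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness   # serves /healthz, then starts returning errors after ~10s
      args: ["/server"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3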
Feb 12 10:52:57.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:52:57.826: INFO: namespace: e2e-tests-container-probe-wsb2n, resource: bindings, ignored listing per whitelist Feb 12 10:52:57.888: INFO: namespace e2e-tests-container-probe-wsb2n deletion completed in 6.31247961s • [SLOW TEST:154.800 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:52:57.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d4aae25d-4d85-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 12 10:52:58.252: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-zbwwp" to be "success or failure" Feb 12 10:52:58.369: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 116.872187ms Feb 12 10:53:00.383: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130612774s Feb 12 10:53:02.420: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168024036s Feb 12 10:53:05.353: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.100884688s Feb 12 10:53:07.369: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.116580129s Feb 12 10:53:09.384: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.131802504s STEP: Saw pod success Feb 12 10:53:09.384: INFO: Pod "pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 10:53:09.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 12 10:53:09.663: INFO: Waiting for pod pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005 to disappear Feb 12 10:53:09.682: INFO: Pod pod-projected-configmaps-d4acbc7e-4d85-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:53:09.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zbwwp" for this suite. Feb 12 10:53:15.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:53:16.000: INFO: namespace: e2e-tests-projected-zbwwp, resource: bindings, ignored listing per whitelist Feb 12 10:53:16.048: INFO: namespace e2e-tests-projected-zbwwp deletion completed in 6.327158199s • [SLOW TEST:18.159 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:53:16.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 12 10:53:28.334: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-df6e5a81-4d85-11ea-b4b9-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-q9lgm", SelfLink:"/api/v1/namespaces/e2e-tests-pods-q9lgm/pods/pod-submit-remove-df6e5a81-4d85-11ea-b4b9-0242ac110005", UID:"df719261-4d85-11ea-a994-fa163e34d433", ResourceVersion:"21409018", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717101596, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"276194733"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-psdn9", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b0e000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-psdn9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a61ec8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d1dec0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a61f00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a61f20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a61f28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a61f2c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717101596, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717101606, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717101606, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717101596, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000bdffa0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000bdffc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://e5718ec1cc86bd65920e238ffccb87aafe726ca3298a7124386e0e3da66b30b2"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:53:35.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-q9lgm" for this suite. 
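The pod test above registers a watch, submits the pod whose full object is dumped in the log, then deletes it gracefully and checks that both the creation and the deletion are observed. The same behaviour can be reproduced by hand, roughly, with the following (pod.yaml and the pod name are placeholders; the name=foo label comes from the dump above):

  kubectl get pods -l name=foo --watch &        # prints a row each time the pod changes
  kubectl apply -f pod.yaml
  kubectl delete pod <pod-name> --grace-period=30

The watch output shows the pod appear, go Running, terminate over the grace period and finally disappear, which is the sequence the test asserts programmatically.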
Feb 12 10:53:41.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:53:41.399: INFO: namespace: e2e-tests-pods-q9lgm, resource: bindings, ignored listing per whitelist Feb 12 10:53:41.465: INFO: namespace e2e-tests-pods-q9lgm deletion completed in 6.262398856s • [SLOW TEST:25.417 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:53:41.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 12 10:53:41.698: INFO: Number of nodes with available pods: 0 Feb 12 10:53:41.698: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:42.743: INFO: Number of nodes with available pods: 0 Feb 12 10:53:42.743: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:43.770: INFO: Number of nodes with available pods: 0 Feb 12 10:53:43.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:44.719: INFO: Number of nodes with available pods: 0 Feb 12 10:53:44.719: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:45.714: INFO: Number of nodes with available pods: 0 Feb 12 10:53:45.714: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:47.072: INFO: Number of nodes with available pods: 0 Feb 12 10:53:47.072: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:48.441: INFO: Number of nodes with available pods: 0 Feb 12 10:53:48.441: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:49.180: INFO: Number of nodes with available pods: 0 Feb 12 10:53:49.180: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:49.736: INFO: Number of nodes with available pods: 0 Feb 12 10:53:49.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:50.728: INFO: Number of nodes with available pods: 0 Feb 12 10:53:50.728: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:51.765: INFO: Number of nodes with available pods: 1 Feb 12 10:53:51.765: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
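The simple DaemonSet named daemon-set that this test creates runs one pod per schedulable node (a single node in this cluster); a minimal manifest of that shape, with assumed labels and image, looks roughly like:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine   # assumed image
          ports:
          - containerPort: 80

Deleting one of its pods, as the step above announces, should cause the DaemonSet controller to recreate it, which is what the availability polling that follows waits for.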
Feb 12 10:53:51.862: INFO: Number of nodes with available pods: 0 Feb 12 10:53:51.862: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:52.974: INFO: Number of nodes with available pods: 0 Feb 12 10:53:52.974: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:53.883: INFO: Number of nodes with available pods: 0 Feb 12 10:53:53.883: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:54.963: INFO: Number of nodes with available pods: 0 Feb 12 10:53:54.963: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:55.896: INFO: Number of nodes with available pods: 0 Feb 12 10:53:55.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:56.925: INFO: Number of nodes with available pods: 0 Feb 12 10:53:56.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:57.896: INFO: Number of nodes with available pods: 0 Feb 12 10:53:57.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:59.042: INFO: Number of nodes with available pods: 0 Feb 12 10:53:59.042: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:53:59.895: INFO: Number of nodes with available pods: 0 Feb 12 10:53:59.895: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:00.890: INFO: Number of nodes with available pods: 0 Feb 12 10:54:00.890: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:01.901: INFO: Number of nodes with available pods: 0 Feb 12 10:54:01.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:04.458: INFO: Number of nodes with available pods: 0 Feb 12 10:54:04.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:04.901: INFO: Number of nodes with available pods: 0 Feb 12 10:54:04.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:05.957: INFO: Number of nodes with available pods: 0 Feb 12 10:54:05.957: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:06.907: INFO: Number of nodes with available pods: 0 Feb 12 10:54:06.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 10:54:07.890: INFO: Number of nodes with available pods: 1 Feb 12 10:54:07.890: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mr7cx, will wait for the garbage collector to delete the pods Feb 12 10:54:07.974: INFO: Deleting DaemonSet.extensions daemon-set took: 18.713359ms Feb 12 10:54:08.074: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.443286ms Feb 12 10:54:16.112: INFO: Number of nodes with available pods: 0 Feb 12 10:54:16.112: INFO: Number of running nodes: 0, number of available pods: 0 Feb 12 10:54:16.213: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mr7cx/daemonsets","resourceVersion":"21409128"},"items":null} Feb 12 10:54:16.236: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mr7cx/pods","resourceVersion":"21409128"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 10:54:16.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mr7cx" for this suite. Feb 12 10:54:22.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 10:54:22.418: INFO: namespace: e2e-tests-daemonsets-mr7cx, resource: bindings, ignored listing per whitelist Feb 12 10:54:22.586: INFO: namespace e2e-tests-daemonsets-mr7cx deletion completed in 6.26660603s • [SLOW TEST:41.121 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 10:54:22.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 12 10:54:24.106: INFO: Pod name wrapped-volume-race-07add0be-4d86-11ea-b4b9-0242ac110005: Found 0 pods out of 5 Feb 12 10:54:29.136: INFO: Pod name wrapped-volume-race-07add0be-4d86-11ea-b4b9-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-07add0be-4d86-11ea-b4b9-0242ac110005 in namespace e2e-tests-emptydir-wrapper-jjsqt, will wait for the garbage collector to delete the pods Feb 12 10:56:29.425: INFO: Deleting ReplicationController wrapped-volume-race-07add0be-4d86-11ea-b4b9-0242ac110005 took: 26.46694ms Feb 12 10:56:29.726: INFO: Terminating ReplicationController wrapped-volume-race-07add0be-4d86-11ea-b4b9-0242ac110005 pods took: 300.859931ms STEP: Creating RC which spawns configmap-volume pods Feb 12 10:57:22.941: INFO: Pod name wrapped-volume-race-7253ca5d-4d86-11ea-b4b9-0242ac110005: Found 0 pods out of 5 Feb 12 10:57:27.966: INFO: Pod name wrapped-volume-race-7253ca5d-4d86-11ea-b4b9-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7253ca5d-4d86-11ea-b4b9-0242ac110005 in namespace e2e-tests-emptydir-wrapper-jjsqt, will wait for the garbage collector to delete the pods Feb 12 10:59:10.246: INFO: Deleting ReplicationController wrapped-volume-race-7253ca5d-4d86-11ea-b4b9-0242ac110005 took: 21.565493ms Feb 12 10:59:10.747: INFO: Terminating ReplicationController wrapped-volume-race-7253ca5d-4d86-11ea-b4b9-0242ac110005 pods took: 
500.593017ms STEP: Creating RC which spawns configmap-volume pods Feb 12 10:59:56.158: INFO: Pod name wrapped-volume-race-cd981fac-4d86-11ea-b4b9-0242ac110005: Found 0 pods out of 5 Feb 12 11:00:01.182: INFO: Pod name wrapped-volume-race-cd981fac-4d86-11ea-b4b9-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cd981fac-4d86-11ea-b4b9-0242ac110005 in namespace e2e-tests-emptydir-wrapper-jjsqt, will wait for the garbage collector to delete the pods Feb 12 11:01:47.335: INFO: Deleting ReplicationController wrapped-volume-race-cd981fac-4d86-11ea-b4b9-0242ac110005 took: 30.425028ms Feb 12 11:01:47.836: INFO: Terminating ReplicationController wrapped-volume-race-cd981fac-4d86-11ea-b4b9-0242ac110005 pods took: 501.011339ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:02:44.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-jjsqt" for this suite. Feb 12 11:02:54.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:02:54.474: INFO: namespace: e2e-tests-emptydir-wrapper-jjsqt, resource: bindings, ignored listing per whitelist Feb 12 11:02:54.509: INFO: namespace e2e-tests-emptydir-wrapper-jjsqt deletion completed in 10.301178027s • [SLOW TEST:511.922 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:02:54.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-3841c37a-4d87-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 12 11:02:54.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-5vfgc" to be "success or failure" Feb 12 11:02:55.127: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 280.274116ms Feb 12 11:02:58.496: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.650069422s Feb 12 11:03:01.963: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.116427719s Feb 12 11:03:04.054: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.207956826s Feb 12 11:03:06.232: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.385317276s Feb 12 11:03:08.251: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.404560218s Feb 12 11:03:10.284: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.437604922s STEP: Saw pod success Feb 12 11:03:10.284: INFO: Pod "pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:03:10.293: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 12 11:03:10.468: INFO: Waiting for pod pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005 to disappear Feb 12 11:03:10.480: INFO: Pod pod-projected-configmaps-3844dd65-4d87-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:03:10.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5vfgc" for this suite. Feb 12 11:03:16.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:03:16.702: INFO: namespace: e2e-tests-projected-5vfgc, resource: bindings, ignored listing per whitelist Feb 12 11:03:16.721: INFO: namespace e2e-tests-projected-5vfgc deletion completed in 6.233158561s • [SLOW TEST:22.211 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:03:16.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 12 11:03:17.119: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-nbpkd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nbpkd/configmaps/e2e-watch-test-resource-version,UID:45748cca-4d87-11ea-a994-fa163e34d433,ResourceVersion:21410228,Generation:0,CreationTimestamp:2020-02-12 11:03:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 11:03:17.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-nbpkd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nbpkd/configmaps/e2e-watch-test-resource-version,UID:45748cca-4d87-11ea-a994-fa163e34d433,ResourceVersion:21410229,Generation:0,CreationTimestamp:2020-02-12 11:03:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:03:17.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-nbpkd" for this suite. Feb 12 11:03:25.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:03:25.363: INFO: namespace: e2e-tests-watch-nbpkd, resource: bindings, ignored listing per whitelist Feb 12 11:03:25.490: INFO: namespace e2e-tests-watch-nbpkd deletion completed in 8.346314836s • [SLOW TEST:8.770 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:03:25.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 12 11:03:25.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nschk' Feb 12 
11:03:28.123: INFO: stderr: "" Feb 12 11:03:28.123: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 11:03:28.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:28.289: INFO: stderr: "" Feb 12 11:03:28.289: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 12 11:03:33.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:33.568: INFO: stderr: "" Feb 12 11:03:33.568: INFO: stdout: "update-demo-nautilus-4xqpx update-demo-nautilus-z8q8h " Feb 12 11:03:33.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xqpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:33.720: INFO: stderr: "" Feb 12 11:03:33.720: INFO: stdout: "" Feb 12 11:03:33.720: INFO: update-demo-nautilus-4xqpx is created but not running Feb 12 11:03:38.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:39.085: INFO: stderr: "" Feb 12 11:03:39.085: INFO: stdout: "update-demo-nautilus-4xqpx update-demo-nautilus-z8q8h " Feb 12 11:03:39.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xqpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:39.257: INFO: stderr: "" Feb 12 11:03:39.258: INFO: stdout: "" Feb 12 11:03:39.258: INFO: update-demo-nautilus-4xqpx is created but not running Feb 12 11:03:44.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:44.493: INFO: stderr: "" Feb 12 11:03:44.493: INFO: stdout: "update-demo-nautilus-4xqpx update-demo-nautilus-z8q8h " Feb 12 11:03:44.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xqpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:44.656: INFO: stderr: "" Feb 12 11:03:44.656: INFO: stdout: "true" Feb 12 11:03:44.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xqpx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:44.833: INFO: stderr: "" Feb 12 11:03:44.833: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:03:44.833: INFO: validating pod update-demo-nautilus-4xqpx Feb 12 11:03:44.857: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:03:44.857: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:03:44.857: INFO: update-demo-nautilus-4xqpx is verified up and running Feb 12 11:03:44.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8q8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:44.992: INFO: stderr: "" Feb 12 11:03:44.992: INFO: stdout: "true" Feb 12 11:03:44.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8q8h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:03:45.139: INFO: stderr: "" Feb 12 11:03:45.139: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:03:45.139: INFO: validating pod update-demo-nautilus-z8q8h Feb 12 11:03:45.149: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:03:45.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:03:45.149: INFO: update-demo-nautilus-z8q8h is verified up and running STEP: rolling-update to new replication controller Feb 12 11:03:45.152: INFO: scanned /root for discovery docs: Feb 12 11:03:45.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:26.861: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 12 11:04:26.861: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 12 11:04:26.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:27.106: INFO: stderr: "" Feb 12 11:04:27.106: INFO: stdout: "update-demo-kitten-2t4cl update-demo-kitten-jmg6z update-demo-nautilus-4xqpx " STEP: Replicas for name=update-demo: expected=2 actual=3 Feb 12 11:04:32.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:32.267: INFO: stderr: "" Feb 12 11:04:32.267: INFO: stdout: "update-demo-kitten-2t4cl update-demo-kitten-jmg6z update-demo-nautilus-4xqpx " STEP: Replicas for name=update-demo: expected=2 actual=3 Feb 12 11:04:37.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:37.450: INFO: stderr: "" Feb 12 11:04:37.450: INFO: stdout: "update-demo-kitten-2t4cl update-demo-kitten-jmg6z " Feb 12 11:04:37.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2t4cl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:37.615: INFO: stderr: "" Feb 12 11:04:37.616: INFO: stdout: "true" Feb 12 11:04:37.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2t4cl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:37.728: INFO: stderr: "" Feb 12 11:04:37.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 12 11:04:37.728: INFO: validating pod update-demo-kitten-2t4cl Feb 12 11:04:37.745: INFO: got data: { "image": "kitten.jpg" } Feb 12 11:04:37.746: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 12 11:04:37.746: INFO: update-demo-kitten-2t4cl is verified up and running Feb 12 11:04:37.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jmg6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:37.936: INFO: stderr: "" Feb 12 11:04:37.936: INFO: stdout: "true" Feb 12 11:04:37.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jmg6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-nschk' Feb 12 11:04:38.118: INFO: stderr: "" Feb 12 11:04:38.119: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 12 11:04:38.119: INFO: validating pod update-demo-kitten-jmg6z Feb 12 11:04:38.201: INFO: got data: { "image": "kitten.jpg" } Feb 12 11:04:38.201: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
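The polling above reads container state and image through kubectl's go-template output (with its "exists" helper). A roughly equivalent spot check with jsonpath, using the pod and namespace from this run (the jsonpath expressions are standard kubectl usage, not taken from the test code):

  # image of the pod's first (and only) container
  kubectl --kubeconfig=/root/.kube/config get pod update-demo-kitten-2t4cl -n e2e-tests-kubectl-nschk -o jsonpath='{.spec.containers[0].image}'
  # non-empty output here means the container is currently running
  kubectl --kubeconfig=/root/.kube/config get pod update-demo-kitten-2t4cl -n e2e-tests-kubectl-nschk -o jsonpath='{.status.containerStatuses[0].state.running}'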
Feb 12 11:04:38.201: INFO: update-demo-kitten-jmg6z is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:04:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nschk" for this suite. Feb 12 11:05:18.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:05:18.367: INFO: namespace: e2e-tests-kubectl-nschk, resource: bindings, ignored listing per whitelist Feb 12 11:05:18.397: INFO: namespace e2e-tests-kubectl-nschk deletion completed in 40.186013009s • [SLOW TEST:112.906 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:05:18.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 12 11:05:18.840: INFO: Waiting up to 5m0s for pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-sqngs" to be "success or failure" Feb 12 11:05:18.856: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.597816ms Feb 12 11:05:21.089: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249223963s Feb 12 11:05:23.104: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263599844s Feb 12 11:05:25.728: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.888045017s Feb 12 11:05:27.751: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.91074962s Feb 12 11:05:29.771: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.93070578s Feb 12 11:05:31.793: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.95288237s STEP: Saw pod success Feb 12 11:05:31.793: INFO: Pod "pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:05:31.801: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 11:05:33.527: INFO: Waiting for pod pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005 to disappear Feb 12 11:05:33.546: INFO: Pod pod-8e09cbf4-4d87-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:05:33.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sqngs" for this suite. Feb 12 11:05:39.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:05:39.763: INFO: namespace: e2e-tests-emptydir-sqngs, resource: bindings, ignored listing per whitelist Feb 12 11:05:39.870: INFO: namespace e2e-tests-emptydir-sqngs deletion completed in 6.310769219s • [SLOW TEST:21.473 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:05:39.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:05:40.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-n6gpp" for this suite. 
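The Kubelet test above only needs to show that a pod whose container keeps failing can still be deleted. A minimal sketch of the same scenario with plain kubectl (pod name and image are illustrative, not taken from this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false
  spec:
    containers:
    - name: bin-false
      image: busybox
      # exits non-zero immediately, so the container never stays up
      command: ["/bin/false"]
  EOF
  # deletion should succeed even though the container is crash-looping
  kubectl delete pod bin-false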
Feb 12 11:05:46.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:05:46.351: INFO: namespace: e2e-tests-kubelet-test-n6gpp, resource: bindings, ignored listing per whitelist Feb 12 11:05:46.363: INFO: namespace e2e-tests-kubelet-test-n6gpp deletion completed in 6.156363743s • [SLOW TEST:6.492 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:05:46.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 12 11:05:46.617: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:06:04.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-fglmj" for this suite. 
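The InitContainer test above creates a RestartNever pod whose init containers must run to completion, in order, before the main container starts. A minimal sketch of such a pod (names and commands are illustrative, not the ones used by the test):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:
    # init containers run one after another; each must exit 0
    - name: init-1
      image: busybox
      command: ["sh", "-c", "true"]
    - name: init-2
      image: busybox
      command: ["sh", "-c", "true"]
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo done"]
  EOF
  # both init containers should report Completed before main runs
  kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'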
Feb 12 11:06:13.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:06:13.190: INFO: namespace: e2e-tests-init-container-fglmj, resource: bindings, ignored listing per whitelist Feb 12 11:06:13.230: INFO: namespace e2e-tests-init-container-fglmj deletion completed in 8.247453182s • [SLOW TEST:26.867 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:06:13.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-kdxd6 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 11:06:13.437: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 12 11:06:53.681: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-kdxd6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:06:53.681: INFO: >>> kubeConfig: /root/.kube/config I0212 11:06:53.813791 8 log.go:172] (0xc00124e6e0) (0xc0024ee1e0) Create stream I0212 11:06:53.813984 8 log.go:172] (0xc00124e6e0) (0xc0024ee1e0) Stream added, broadcasting: 1 I0212 11:06:53.853946 8 log.go:172] (0xc00124e6e0) Reply frame received for 1 I0212 11:06:53.854158 8 log.go:172] (0xc00124e6e0) (0xc0011c8000) Create stream I0212 11:06:53.854193 8 log.go:172] (0xc00124e6e0) (0xc0011c8000) Stream added, broadcasting: 3 I0212 11:06:53.856421 8 log.go:172] (0xc00124e6e0) Reply frame received for 3 I0212 11:06:53.856469 8 log.go:172] (0xc00124e6e0) (0xc0025d6000) Create stream I0212 11:06:53.856484 8 log.go:172] (0xc00124e6e0) (0xc0025d6000) Stream added, broadcasting: 5 I0212 11:06:53.857615 8 log.go:172] (0xc00124e6e0) Reply frame received for 5 I0212 11:06:54.024417 8 log.go:172] (0xc00124e6e0) Data frame received for 3 I0212 11:06:54.024490 8 log.go:172] (0xc0011c8000) (3) Data frame handling I0212 11:06:54.024521 8 log.go:172] (0xc0011c8000) (3) Data frame sent I0212 11:06:54.335183 8 log.go:172] (0xc00124e6e0) (0xc0011c8000) Stream removed, broadcasting: 3 I0212 11:06:54.335424 8 log.go:172] (0xc00124e6e0) Data frame received for 1 I0212 11:06:54.335492 8 log.go:172] (0xc0024ee1e0) (1) Data frame handling I0212 11:06:54.335556 8 log.go:172] (0xc0024ee1e0) (1) Data frame sent I0212 11:06:54.335591 8 log.go:172] (0xc00124e6e0) (0xc0025d6000) Stream removed, broadcasting: 
5 I0212 11:06:54.335753 8 log.go:172] (0xc00124e6e0) (0xc0024ee1e0) Stream removed, broadcasting: 1 I0212 11:06:54.335850 8 log.go:172] (0xc00124e6e0) Go away received I0212 11:06:54.336152 8 log.go:172] (0xc00124e6e0) (0xc0024ee1e0) Stream removed, broadcasting: 1 I0212 11:06:54.336180 8 log.go:172] (0xc00124e6e0) (0xc0011c8000) Stream removed, broadcasting: 3 I0212 11:06:54.336193 8 log.go:172] (0xc00124e6e0) (0xc0025d6000) Stream removed, broadcasting: 5 Feb 12 11:06:54.336: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:06:54.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-kdxd6" for this suite. Feb 12 11:07:18.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:07:18.455: INFO: namespace: e2e-tests-pod-network-test-kdxd6, resource: bindings, ignored listing per whitelist Feb 12 11:07:18.863: INFO: namespace e2e-tests-pod-network-test-kdxd6 deletion completed in 24.50624073s • [SLOW TEST:65.633 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:07:18.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 12 11:07:29.705: INFO: Successfully updated pod "labelsupdated5c54ba9-4d87-11ea-b4b9-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:07:31.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2bbff" for this suite. 
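The Projected downwardAPI test above ("should update labels on modification") checks that a label change shows up in a file projected into the pod. A minimal sketch of that wiring (pod name, label, and mount path are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo
    labels:
      tier: demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            # the pod's labels are projected into /etc/podinfo/labels
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF
  kubectl label pod labels-demo tier=updated --overwrite
  kubectl exec labels-demo -- cat /etc/podinfo/labels

The kubelet refreshes the projected file shortly after the label change; the test is waiting for the file contents to reflect the new label.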
Feb 12 11:07:56.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:07:56.151: INFO: namespace: e2e-tests-projected-2bbff, resource: bindings, ignored listing per whitelist Feb 12 11:07:56.232: INFO: namespace e2e-tests-projected-2bbff deletion completed in 24.373352338s • [SLOW TEST:37.368 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:07:56.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005 Feb 12 11:07:56.620: INFO: Pod name my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005: Found 0 pods out of 1 Feb 12 11:08:01.641: INFO: Pod name my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005: Found 1 pods out of 1 Feb 12 11:08:01.641: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005" are running Feb 12 11:08:09.663: INFO: Pod "my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005-6fq9d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:07:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:07:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:07:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:07:56 +0000 UTC Reason: Message:}]) Feb 12 11:08:09.663: INFO: Trying to dial the pod Feb 12 11:08:14.751: INFO: Controller my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005: Got expected result from replica 1 [my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005-6fq9d]: "my-hostname-basic-ec23b22d-4d87-11ea-b4b9-0242ac110005-6fq9d", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:08:14.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-cpsvz" for this suite. 
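The ReplicationController test above creates a single-replica controller and dials the replica it produces. A minimal RC manifest of the same shape (name, label, and image are illustrative; the log does not show which image the test serves):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    # the RC manages any pod matching this label
    selector:
      app: my-hostname-basic
    template:
      metadata:
        labels:
          app: my-hostname-basic
      spec:
        containers:
        - name: serve
          image: nginx
          ports:
          - containerPort: 80
  EOF
  kubectl get pods -l app=my-hostname-basic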
Feb 12 11:08:22.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:08:25.432: INFO: namespace: e2e-tests-replication-controller-cpsvz, resource: bindings, ignored listing per whitelist Feb 12 11:08:25.736: INFO: namespace e2e-tests-replication-controller-cpsvz deletion completed in 10.976370994s • [SLOW TEST:29.504 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:08:25.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-fddbf1c0-4d87-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume secrets Feb 12 11:08:26.376: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-qdlqw" to be "success or failure" Feb 12 11:08:28.048: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 1.671929448s Feb 12 11:08:30.066: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.690010366s Feb 12 11:08:32.080: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.703767462s Feb 12 11:08:34.095: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.718959846s Feb 12 11:08:36.115: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.738538104s Feb 12 11:08:38.137: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.760454468s STEP: Saw pod success Feb 12 11:08:38.137: INFO: Pod "pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:08:38.144: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Feb 12 11:08:38.375: INFO: Waiting for pod pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005 to disappear Feb 12 11:08:38.386: INFO: Pod pod-projected-secrets-fddd104c-4d87-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:08:38.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qdlqw" for this suite. Feb 12 11:08:44.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:08:44.772: INFO: namespace: e2e-tests-projected-qdlqw, resource: bindings, ignored listing per whitelist Feb 12 11:08:44.782: INFO: namespace e2e-tests-projected-qdlqw deletion completed in 6.389189615s • [SLOW TEST:19.044 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:08:44.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 12 11:08:45.101: INFO: Waiting up to 5m0s for pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-g4q7q" to be "success or failure" Feb 12 11:08:45.108: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.541046ms Feb 12 11:08:47.130: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029323586s Feb 12 11:08:49.153: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052374788s Feb 12 11:08:51.211: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110314917s Feb 12 11:08:53.220: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118630466s Feb 12 11:08:55.242: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.141252375s STEP: Saw pod success Feb 12 11:08:55.242: INFO: Pod "downward-api-090b287b-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:08:55.250: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-090b287b-4d88-11ea-b4b9-0242ac110005 container dapi-container: STEP: delete the pod Feb 12 11:08:56.332: INFO: Waiting for pod downward-api-090b287b-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:08:56.357: INFO: Pod downward-api-090b287b-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:08:56.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g4q7q" for this suite. Feb 12 11:09:02.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:09:02.650: INFO: namespace: e2e-tests-downward-api-g4q7q, resource: bindings, ignored listing per whitelist Feb 12 11:09:02.737: INFO: namespace e2e-tests-downward-api-g4q7q deletion completed in 6.189604444s • [SLOW TEST:17.954 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:09:02.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 12 11:09:03.061: INFO: Waiting up to 5m0s for pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-var-expansion-whpzj" to be "success or failure" Feb 12 11:09:03.100: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.292194ms Feb 12 11:09:05.127: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065319384s Feb 12 11:09:07.153: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092121706s Feb 12 11:09:09.165: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10330382s Feb 12 11:09:11.647: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586128797s Feb 12 11:09:13.661: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.599528598s STEP: Saw pod success Feb 12 11:09:13.661: INFO: Pod "var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:09:13.666: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005 container dapi-container: STEP: delete the pod Feb 12 11:09:14.270: INFO: Waiting for pod var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:09:14.330: INFO: Pod var-expansion-13b746d1-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:09:14.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-whpzj" for this suite. Feb 12 11:09:20.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:09:20.529: INFO: namespace: e2e-tests-var-expansion-whpzj, resource: bindings, ignored listing per whitelist Feb 12 11:09:20.902: INFO: namespace e2e-tests-var-expansion-whpzj deletion completed in 6.499773416s • [SLOW TEST:18.165 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:09:20.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 12 11:09:21.267: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:09:48.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-c4ngm" for this suite. 
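The Variable Expansion test above composes one environment variable from another using the $(VAR) syntax in the pod spec. A minimal sketch (names and values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "echo $COMPOSED"]
      env:
      - name: BASE
        value: "hello"
      # $(BASE) is expanded by Kubernetes, so COMPOSED becomes "hello-world"
      - name: COMPOSED
        value: "$(BASE)-world"
  EOF
  kubectl logs var-expansion-demo

COMPOSED must be declared after BASE: $(VAR) references only resolve against variables defined earlier in the env list.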
Feb 12 11:10:14.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:10:14.431: INFO: namespace: e2e-tests-init-container-c4ngm, resource: bindings, ignored listing per whitelist Feb 12 11:10:14.518: INFO: namespace e2e-tests-init-container-c4ngm deletion completed in 26.269100041s • [SLOW TEST:53.614 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:10:14.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Feb 12 11:10:14.791: INFO: Waiting up to 5m0s for pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-containers-lnb2s" to be "success or failure" Feb 12 11:10:14.811: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.412176ms Feb 12 11:10:16.845: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053792222s Feb 12 11:10:18.872: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080790826s Feb 12 11:10:21.366: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574733643s Feb 12 11:10:23.378: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586619576s Feb 12 11:10:25.484: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.692278585s STEP: Saw pod success Feb 12 11:10:25.484: INFO: Pod "client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:10:25.538: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 11:10:26.531: INFO: Waiting for pod client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:10:26.786: INFO: Pod client-containers-3e7f196f-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:10:26.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-lnb2s" for this suite. 
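The Docker Containers test above overrides both the image's entrypoint and its arguments. The pod-spec fields involved are command (entrypoint) and args (cmd); a minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      # command replaces the image ENTRYPOINT, args replaces the image CMD
      command: ["/bin/echo"]
      args: ["overridden", "args"]
  EOF
  kubectl logs override-demo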
Feb 12 11:10:34.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:10:35.098: INFO: namespace: e2e-tests-containers-lnb2s, resource: bindings, ignored listing per whitelist Feb 12 11:10:35.114: INFO: namespace e2e-tests-containers-lnb2s deletion completed in 8.302573713s • [SLOW TEST:20.595 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:10:35.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 12 11:10:35.255: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:10:36.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-9v4jf" for this suite. 
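The CustomResourceDefinition test above simply creates and deletes a CRD object. On the v1.13 cluster in this log that would go through the apiextensions.k8s.io/v1beta1 API; a minimal sketch with an illustrative group and kind (current clusters use apiextensions.k8s.io/v1, which instead takes a versions list with a required schema):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    # must be <plural>.<group>
    name: foos.example.com
  spec:
    group: example.com
    version: v1
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
  EOF
  kubectl get crd foos.example.com
  kubectl delete crd foos.example.com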
Feb 12 11:10:42.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:10:42.795: INFO: namespace: e2e-tests-custom-resource-definition-9v4jf, resource: bindings, ignored listing per whitelist Feb 12 11:10:42.891: INFO: namespace e2e-tests-custom-resource-definition-9v4jf deletion completed in 6.250712185s • [SLOW TEST:7.777 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:10:42.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:10:43.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-k7kb9" to be "success or failure" Feb 12 11:10:43.126: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.702644ms Feb 12 11:10:45.151: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042183196s Feb 12 11:10:47.164: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055112014s Feb 12 11:10:49.188: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078767991s Feb 12 11:10:51.200: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091547819s Feb 12 11:10:53.224: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.114885251s STEP: Saw pod success Feb 12 11:10:53.224: INFO: Pod "downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:10:53.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:10:53.437: INFO: Waiting for pod downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:10:53.476: INFO: Pod downwardapi-volume-4f624869-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:10:53.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k7kb9" for this suite. Feb 12 11:11:00.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:11:01.034: INFO: namespace: e2e-tests-downward-api-k7kb9, resource: bindings, ignored listing per whitelist Feb 12 11:11:01.060: INFO: namespace e2e-tests-downward-api-k7kb9 deletion completed in 7.557628997s • [SLOW TEST:18.168 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:11:01.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 12 11:11:01.235: INFO: Creating ReplicaSet my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005 Feb 12 11:11:01.293: INFO: Pod name my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005: Found 0 pods out of 1 Feb 12 11:11:06.318: INFO: Pod name my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005: Found 1 pods out of 1 Feb 12 11:11:06.318: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005" is running Feb 12 11:11:12.339: INFO: Pod "my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005-vv5j9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:11:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:11:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:11:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 11:11:01 +0000 UTC Reason: Message:}]) Feb 12 11:11:12.339: INFO: Trying to dial the pod Feb 12 11:11:17.396: INFO: Controller my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005: Got expected result from replica 1 [my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005-vv5j9]: "my-hostname-basic-5a3230f2-4d88-11ea-b4b9-0242ac110005-vv5j9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:11:17.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-7d9mc" for this suite. Feb 12 11:11:26.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:11:26.984: INFO: namespace: e2e-tests-replicaset-7d9mc, resource: bindings, ignored listing per whitelist Feb 12 11:11:27.144: INFO: namespace e2e-tests-replicaset-7d9mc deletion completed in 9.737297932s • [SLOW TEST:26.084 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:11:27.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 12 11:11:27.368: INFO: Waiting up to 5m0s for pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-8g8j2" to be "success or failure" Feb 12 11:11:27.481: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 112.325931ms Feb 12 11:11:29.501: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132842167s Feb 12 11:11:31.517: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148214389s Feb 12 11:11:33.890: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521452101s Feb 12 11:11:35.919: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550058512s Feb 12 11:11:37.944: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.575458489s STEP: Saw pod success Feb 12 11:11:37.944: INFO: Pod "pod-69c26401-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:11:37.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-69c26401-4d88-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 11:11:38.202: INFO: Waiting for pod pod-69c26401-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:11:38.246: INFO: Pod pod-69c26401-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:11:38.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8g8j2" for this suite. Feb 12 11:11:44.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:11:44.674: INFO: namespace: e2e-tests-emptydir-8g8j2, resource: bindings, ignored listing per whitelist Feb 12 11:11:44.733: INFO: namespace e2e-tests-emptydir-8g8j2 deletion completed in 6.461025906s • [SLOW TEST:17.588 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:11:44.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 12 11:12:00.091: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:12:01.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-g9wb7" for this suite. 
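The ReplicaSet adoption test above hinges on label selection: a bare pod whose labels match the selector is adopted, and changing that label releases it again, after which the ReplicaSet creates a replacement to restore its desired count. With plain kubectl the release step looks roughly like this (the label key mirrors the test's "name" label; the new value is illustrative):

  # changing the selected label releases the pod from the ReplicaSet
  kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
  # the ReplicaSet then spins up a replacement matching the original selector
  kubectl get pods -l name=pod-adoption-release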
Feb 12 11:12:29.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:12:29.711: INFO: namespace: e2e-tests-replicaset-g9wb7, resource: bindings, ignored listing per whitelist Feb 12 11:12:29.805: INFO: namespace e2e-tests-replicaset-g9wb7 deletion completed in 28.612635717s • [SLOW TEST:45.072 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:12:29.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:12:30.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-p2765" to be "success or failure" Feb 12 11:12:30.261: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.781495ms Feb 12 11:12:32.276: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048438525s Feb 12 11:12:34.303: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075281668s Feb 12 11:12:36.323: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095042379s Feb 12 11:12:38.432: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204647649s Feb 12 11:12:40.463: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.235440359s Feb 12 11:12:42.513: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.285504714s STEP: Saw pod success Feb 12 11:12:42.513: INFO: Pod "downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:12:42.570: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:12:42.703: INFO: Waiting for pod downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:12:42.711: INFO: Pod downwardapi-volume-8f2e1938-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:12:42.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p2765" for this suite. Feb 12 11:12:48.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:12:48.937: INFO: namespace: e2e-tests-downward-api-p2765, resource: bindings, ignored listing per whitelist Feb 12 11:12:49.128: INFO: namespace e2e-tests-downward-api-p2765 deletion completed in 6.411647722s • [SLOW TEST:19.323 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:12:49.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9a971996-4d88-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume secrets Feb 12 11:12:49.349: INFO: Waiting up to 5m0s for pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-kpv4v" to be "success or failure" Feb 12 11:12:49.376: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.783929ms Feb 12 11:12:51.674: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32403686s Feb 12 11:12:53.738: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.388374795s Feb 12 11:12:55.929: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579962814s Feb 12 11:12:58.017: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.667343075s Feb 12 11:13:00.030: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.680854574s STEP: Saw pod success Feb 12 11:13:00.030: INFO: Pod "pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:13:00.036: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 12 11:13:00.778: INFO: Waiting for pod pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:13:01.356: INFO: Pod pod-secrets-9a9f5282-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:13:01.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kpv4v" for this suite. Feb 12 11:13:07.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:13:07.603: INFO: namespace: e2e-tests-secrets-kpv4v, resource: bindings, ignored listing per whitelist Feb 12 11:13:07.713: INFO: namespace e2e-tests-secrets-kpv4v deletion completed in 6.344151692s • [SLOW TEST:18.584 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:13:07.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a5bbc574-4d88-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 12 11:13:08.023: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-494z2" to be "success or failure" Feb 12 11:13:08.120: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.827723ms Feb 12 11:13:10.254: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230798184s Feb 12 11:13:12.278: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254865876s Feb 12 11:13:14.677: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654023939s Feb 12 11:13:16.702: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.678842742s Feb 12 11:13:18.744: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.720980102s STEP: Saw pod success Feb 12 11:13:18.744: INFO: Pod "pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:13:18.765: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 12 11:13:19.026: INFO: Waiting for pod pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:13:19.083: INFO: Pod pod-configmaps-a5c1f5a8-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:13:19.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-494z2" for this suite. Feb 12 11:13:25.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:13:25.300: INFO: namespace: e2e-tests-configmap-494z2, resource: bindings, ignored listing per whitelist Feb 12 11:13:25.374: INFO: namespace e2e-tests-configmap-494z2 deletion completed in 6.203579757s • [SLOW TEST:17.661 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:13:25.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 12 11:13:25.647: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kgqf7,SelfLink:/api/v1/namespaces/e2e-tests-watch-kgqf7/configmaps/e2e-watch-test-label-changed,UID:b0310fcd-4d88-11ea-a994-fa163e34d433,ResourceVersion:21411596,Generation:0,CreationTimestamp:2020-02-12 11:13:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 11:13:25.647: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kgqf7,SelfLink:/api/v1/namespaces/e2e-tests-watch-kgqf7/configmaps/e2e-watch-test-label-changed,UID:b0310fcd-4d88-11ea-a994-fa163e34d433,ResourceVersion:21411597,Generation:0,CreationTimestamp:2020-02-12 11:13:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 12 11:13:25.647: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kgqf7,SelfLink:/api/v1/namespaces/e2e-tests-watch-kgqf7/configmaps/e2e-watch-test-label-changed,UID:b0310fcd-4d88-11ea-a994-fa163e34d433,ResourceVersion:21411598,Generation:0,CreationTimestamp:2020-02-12 11:13:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 12 11:13:36.062: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kgqf7,SelfLink:/api/v1/namespaces/e2e-tests-watch-kgqf7/configmaps/e2e-watch-test-label-changed,UID:b0310fcd-4d88-11ea-a994-fa163e34d433,ResourceVersion:21411612,Generation:0,CreationTimestamp:2020-02-12 11:13:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 11:13:36.064: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kgqf7,SelfLink:/api/v1/namespaces/e2e-tests-watch-kgqf7/configmaps/e2e-watch-test-label-changed,UID:b0310fcd-4d88-11ea-a994-fa163e34d433,ResourceVersion:21411613,Generation:0,CreationTimestamp:2020-02-12 11:13:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 12 11:13:36.064: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-kgqf7,SelfLink:/api/v1/namespaces/e2e-tests-watch-kgqf7/configmaps/e2e-watch-test-label-changed,UID:b0310fcd-4d88-11ea-a994-fa163e34d433,ResourceVersion:21411615,Generation:0,CreationTimestamp:2020-02-12 11:13:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:13:36.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-kgqf7" for this suite. Feb 12 11:13:42.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:13:42.221: INFO: namespace: e2e-tests-watch-kgqf7, resource: bindings, ignored listing per whitelist Feb 12 11:13:42.314: INFO: namespace e2e-tests-watch-kgqf7 deletion completed in 6.236174374s • [SLOW TEST:16.939 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:13:42.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:13:42.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-n5dpn" to be "success or failure" Feb 12 11:13:42.687: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.663095ms Feb 12 11:13:44.761: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088142203s Feb 12 11:13:46.780: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106760101s Feb 12 11:13:48.798: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124553275s Feb 12 11:13:50.913: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24005416s Feb 12 11:13:52.926: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.252594975s STEP: Saw pod success Feb 12 11:13:52.926: INFO: Pod "downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:13:52.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:13:54.079: INFO: Waiting for pod downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:13:54.100: INFO: Pod downwardapi-volume-ba63bc10-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:13:54.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n5dpn" for this suite. Feb 12 11:14:00.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:14:00.413: INFO: namespace: e2e-tests-projected-n5dpn, resource: bindings, ignored listing per whitelist Feb 12 11:14:00.517: INFO: namespace e2e-tests-projected-n5dpn deletion completed in 6.398767581s • [SLOW TEST:18.202 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:14:00.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-c5318590-4d88-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 12 11:14:00.869: INFO: Waiting up to 5m0s for pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-4qt75" to be "success or failure" Feb 12 11:14:00.917: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.937928ms Feb 12 11:14:03.247: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377829283s Feb 12 11:14:05.265: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39606343s Feb 12 11:14:07.334: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465076087s Feb 12 11:14:09.349: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.479783958s Feb 12 11:14:11.425: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556183254s STEP: Saw pod success Feb 12 11:14:11.425: INFO: Pod "pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:14:11.543: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005 container configmap-volume-test: STEP: delete the pod Feb 12 11:14:11.741: INFO: Waiting for pod pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005 to disappear Feb 12 11:14:11.760: INFO: Pod pod-configmaps-c53eebef-4d88-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:14:11.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4qt75" for this suite. Feb 12 11:14:20.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:14:20.486: INFO: namespace: e2e-tests-configmap-4qt75, resource: bindings, ignored listing per whitelist Feb 12 11:14:20.597: INFO: namespace e2e-tests-configmap-4qt75 deletion completed in 8.818003874s • [SLOW TEST:20.080 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:14:20.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nkb84 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Feb 12 11:14:20.966: INFO: Found 0 stateful pods, waiting for 3 Feb 12 11:14:31.026: INFO: Found 1 stateful pods, waiting for 3 Feb 12 11:14:41.042: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 11:14:41.042: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 11:14:41.042: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 12 11:14:50.985: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 11:14:50.985: INFO: Waiting 
for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 11:14:50.985: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 12 11:14:50.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkb84 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 11:14:51.678: INFO: stderr: "I0212 11:14:51.234368 500 log.go:172] (0xc0001aa0b0) (0xc0002dd360) Create stream\nI0212 11:14:51.234986 500 log.go:172] (0xc0001aa0b0) (0xc0002dd360) Stream added, broadcasting: 1\nI0212 11:14:51.241343 500 log.go:172] (0xc0001aa0b0) Reply frame received for 1\nI0212 11:14:51.241383 500 log.go:172] (0xc0001aa0b0) (0xc0002dd400) Create stream\nI0212 11:14:51.241394 500 log.go:172] (0xc0001aa0b0) (0xc0002dd400) Stream added, broadcasting: 3\nI0212 11:14:51.242526 500 log.go:172] (0xc0001aa0b0) Reply frame received for 3\nI0212 11:14:51.242575 500 log.go:172] (0xc0001aa0b0) (0xc0002dd4a0) Create stream\nI0212 11:14:51.242586 500 log.go:172] (0xc0001aa0b0) (0xc0002dd4a0) Stream added, broadcasting: 5\nI0212 11:14:51.243458 500 log.go:172] (0xc0001aa0b0) Reply frame received for 5\nI0212 11:14:51.421749 500 log.go:172] (0xc0001aa0b0) Data frame received for 3\nI0212 11:14:51.421933 500 log.go:172] (0xc0002dd400) (3) Data frame handling\nI0212 11:14:51.421978 500 log.go:172] (0xc0002dd400) (3) Data frame sent\nI0212 11:14:51.656593 500 log.go:172] (0xc0001aa0b0) Data frame received for 1\nI0212 11:14:51.656903 500 log.go:172] (0xc0001aa0b0) (0xc0002dd400) Stream removed, broadcasting: 3\nI0212 11:14:51.657005 500 log.go:172] (0xc0002dd360) (1) Data frame handling\nI0212 11:14:51.657050 500 log.go:172] (0xc0002dd360) (1) Data frame sent\nI0212 11:14:51.657237 500 log.go:172] (0xc0001aa0b0) (0xc0002dd4a0) Stream removed, broadcasting: 5\nI0212 11:14:51.657322 500 log.go:172] (0xc0001aa0b0) (0xc0002dd360) Stream removed, broadcasting: 1\nI0212 11:14:51.657357 500 log.go:172] (0xc0001aa0b0) Go away received\nI0212 11:14:51.659162 500 log.go:172] (0xc0001aa0b0) (0xc0002dd360) Stream removed, broadcasting: 1\nI0212 11:14:51.659277 500 log.go:172] (0xc0001aa0b0) (0xc0002dd400) Stream removed, broadcasting: 3\nI0212 11:14:51.659289 500 log.go:172] (0xc0001aa0b0) (0xc0002dd4a0) Stream removed, broadcasting: 5\n" Feb 12 11:14:51.679: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 11:14:51.679: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 12 11:14:51.802: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 12 11:15:01.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkb84 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 11:15:02.693: INFO: stderr: "I0212 11:15:02.124640 522 log.go:172] (0xc00014c630) (0xc0005d7540) Create stream\nI0212 11:15:02.124943 522 log.go:172] (0xc00014c630) (0xc0005d7540) Stream added, broadcasting: 1\nI0212 11:15:02.133169 522 log.go:172] (0xc00014c630) Reply frame received for 1\nI0212 11:15:02.133282 522 log.go:172] (0xc00014c630) (0xc000130000) Create stream\nI0212 11:15:02.133300 522 log.go:172] (0xc00014c630) (0xc000130000) Stream added, 
broadcasting: 3\nI0212 11:15:02.134346 522 log.go:172] (0xc00014c630) Reply frame received for 3\nI0212 11:15:02.134371 522 log.go:172] (0xc00014c630) (0xc0005d75e0) Create stream\nI0212 11:15:02.134382 522 log.go:172] (0xc00014c630) (0xc0005d75e0) Stream added, broadcasting: 5\nI0212 11:15:02.135733 522 log.go:172] (0xc00014c630) Reply frame received for 5\nI0212 11:15:02.305996 522 log.go:172] (0xc00014c630) Data frame received for 3\nI0212 11:15:02.306224 522 log.go:172] (0xc000130000) (3) Data frame handling\nI0212 11:15:02.306329 522 log.go:172] (0xc000130000) (3) Data frame sent\nI0212 11:15:02.677742 522 log.go:172] (0xc00014c630) (0xc0005d75e0) Stream removed, broadcasting: 5\nI0212 11:15:02.677981 522 log.go:172] (0xc00014c630) Data frame received for 1\nI0212 11:15:02.678168 522 log.go:172] (0xc00014c630) (0xc000130000) Stream removed, broadcasting: 3\nI0212 11:15:02.678233 522 log.go:172] (0xc0005d7540) (1) Data frame handling\nI0212 11:15:02.678269 522 log.go:172] (0xc0005d7540) (1) Data frame sent\nI0212 11:15:02.678340 522 log.go:172] (0xc00014c630) (0xc0005d7540) Stream removed, broadcasting: 1\nI0212 11:15:02.678389 522 log.go:172] (0xc00014c630) Go away received\nI0212 11:15:02.679255 522 log.go:172] (0xc00014c630) (0xc0005d7540) Stream removed, broadcasting: 1\nI0212 11:15:02.679276 522 log.go:172] (0xc00014c630) (0xc000130000) Stream removed, broadcasting: 3\nI0212 11:15:02.679285 522 log.go:172] (0xc00014c630) (0xc0005d75e0) Stream removed, broadcasting: 5\n" Feb 12 11:15:02.693: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 11:15:02.693: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 11:15:02.967: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:15:02.967: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:02.967: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:02.967: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:12.995: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:15:12.995: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:12.995: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:23.032: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:15:23.032: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:23.032: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:32.990: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:15:32.991: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:42.998: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:15:42.998: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 12 11:15:53.106: INFO: 
Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update STEP: Rolling back to a previous revision Feb 12 11:16:03.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkb84 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 11:16:03.603: INFO: stderr: "I0212 11:16:03.234170 544 log.go:172] (0xc000778370) (0xc00066f360) Create stream\nI0212 11:16:03.234534 544 log.go:172] (0xc000778370) (0xc00066f360) Stream added, broadcasting: 1\nI0212 11:16:03.243025 544 log.go:172] (0xc000778370) Reply frame received for 1\nI0212 11:16:03.243081 544 log.go:172] (0xc000778370) (0xc0007d2000) Create stream\nI0212 11:16:03.243119 544 log.go:172] (0xc000778370) (0xc0007d2000) Stream added, broadcasting: 3\nI0212 11:16:03.244171 544 log.go:172] (0xc000778370) Reply frame received for 3\nI0212 11:16:03.244220 544 log.go:172] (0xc000778370) (0xc0007a2000) Create stream\nI0212 11:16:03.244252 544 log.go:172] (0xc000778370) (0xc0007a2000) Stream added, broadcasting: 5\nI0212 11:16:03.245569 544 log.go:172] (0xc000778370) Reply frame received for 5\nI0212 11:16:03.410443 544 log.go:172] (0xc000778370) Data frame received for 3\nI0212 11:16:03.410618 544 log.go:172] (0xc0007d2000) (3) Data frame handling\nI0212 11:16:03.410655 544 log.go:172] (0xc0007d2000) (3) Data frame sent\nI0212 11:16:03.574066 544 log.go:172] (0xc000778370) (0xc0007d2000) Stream removed, broadcasting: 3\nI0212 11:16:03.574623 544 log.go:172] (0xc000778370) Data frame received for 1\nI0212 11:16:03.574676 544 log.go:172] (0xc00066f360) (1) Data frame handling\nI0212 11:16:03.574735 544 log.go:172] (0xc00066f360) (1) Data frame sent\nI0212 11:16:03.574770 544 log.go:172] (0xc000778370) (0xc00066f360) Stream removed, broadcasting: 1\nI0212 11:16:03.575337 544 log.go:172] (0xc000778370) (0xc0007a2000) Stream removed, broadcasting: 5\nI0212 11:16:03.575575 544 log.go:172] (0xc000778370) Go away received\nI0212 11:16:03.576129 544 log.go:172] (0xc000778370) (0xc00066f360) Stream removed, broadcasting: 1\nI0212 11:16:03.576161 544 log.go:172] (0xc000778370) (0xc0007d2000) Stream removed, broadcasting: 3\nI0212 11:16:03.576190 544 log.go:172] (0xc000778370) (0xc0007a2000) Stream removed, broadcasting: 5\n" Feb 12 11:16:03.604: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 11:16:03.604: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 11:16:13.749: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 12 11:16:23.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkb84 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 11:16:24.627: INFO: stderr: "I0212 11:16:24.179191 567 log.go:172] (0xc0007c02c0) (0xc0005de5a0) Create stream\nI0212 11:16:24.179549 567 log.go:172] (0xc0007c02c0) (0xc0005de5a0) Stream added, broadcasting: 1\nI0212 11:16:24.188265 567 log.go:172] (0xc0007c02c0) Reply frame received for 1\nI0212 11:16:24.188313 567 log.go:172] (0xc0007c02c0) (0xc0004cedc0) Create stream\nI0212 11:16:24.188323 567 log.go:172] (0xc0007c02c0) (0xc0004cedc0) Stream added, broadcasting: 3\nI0212 11:16:24.189295 567 log.go:172] (0xc0007c02c0) Reply frame received for 3\nI0212 11:16:24.189331 567 log.go:172] (0xc0007c02c0) (0xc00086e0a0) Create stream\nI0212 11:16:24.189345 567 log.go:172] 
(0xc0007c02c0) (0xc00086e0a0) Stream added, broadcasting: 5\nI0212 11:16:24.190416 567 log.go:172] (0xc0007c02c0) Reply frame received for 5\nI0212 11:16:24.382328 567 log.go:172] (0xc0007c02c0) Data frame received for 3\nI0212 11:16:24.382581 567 log.go:172] (0xc0004cedc0) (3) Data frame handling\nI0212 11:16:24.382628 567 log.go:172] (0xc0004cedc0) (3) Data frame sent\nI0212 11:16:24.606251 567 log.go:172] (0xc0007c02c0) Data frame received for 1\nI0212 11:16:24.606482 567 log.go:172] (0xc0007c02c0) (0xc0004cedc0) Stream removed, broadcasting: 3\nI0212 11:16:24.606576 567 log.go:172] (0xc0005de5a0) (1) Data frame handling\nI0212 11:16:24.606645 567 log.go:172] (0xc0005de5a0) (1) Data frame sent\nI0212 11:16:24.606677 567 log.go:172] (0xc0007c02c0) (0xc00086e0a0) Stream removed, broadcasting: 5\nI0212 11:16:24.606712 567 log.go:172] (0xc0007c02c0) (0xc0005de5a0) Stream removed, broadcasting: 1\nI0212 11:16:24.606754 567 log.go:172] (0xc0007c02c0) Go away received\nI0212 11:16:24.607786 567 log.go:172] (0xc0007c02c0) (0xc0005de5a0) Stream removed, broadcasting: 1\nI0212 11:16:24.607841 567 log.go:172] (0xc0007c02c0) (0xc0004cedc0) Stream removed, broadcasting: 3\nI0212 11:16:24.607849 567 log.go:172] (0xc0007c02c0) (0xc00086e0a0) Stream removed, broadcasting: 5\n" Feb 12 11:16:24.627: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 11:16:24.627: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 11:16:34.684: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:16:34.684: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 12 11:16:34.684: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 12 11:16:48.083: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:16:48.083: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 12 11:16:48.083: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 12 11:16:54.800: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:16:54.800: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 12 11:17:05.814: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update Feb 12 11:17:05.815: INFO: Waiting for Pod e2e-tests-statefulset-nkb84/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 12 11:17:14.723: INFO: Waiting for StatefulSet e2e-tests-statefulset-nkb84/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 12 11:17:24.732: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nkb84 Feb 12 11:17:24.741: INFO: Scaling statefulset ss2 to 0 Feb 12 11:17:54.785: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 11:17:54.793: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:17:54.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-statefulset-nkb84" for this suite. Feb 12 11:18:02.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:18:03.119: INFO: namespace: e2e-tests-statefulset-nkb84, resource: bindings, ignored listing per whitelist Feb 12 11:18:03.149: INFO: namespace e2e-tests-statefulset-nkb84 deletion completed in 8.25717042s • [SLOW TEST:222.551 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:18:03.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 12 11:18:03.443: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412334,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 11:18:03.443: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412334,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 12 11:18:16.644: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412349,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 12 11:18:16.645: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412349,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 12 11:18:26.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412360,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 11:18:26.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412360,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 12 11:18:36.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412373,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Feb 12 11:18:36.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-a,UID:55d7a42b-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412373,Generation:0,CreationTimestamp:2020-02-12 11:18:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 12 11:18:46.765: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-b,UID:6fa5813e-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412386,Generation:0,CreationTimestamp:2020-02-12 11:18:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 11:18:46.766: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-b,UID:6fa5813e-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412386,Generation:0,CreationTimestamp:2020-02-12 11:18:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 12 11:18:56.790: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-b,UID:6fa5813e-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412398,Generation:0,CreationTimestamp:2020-02-12 11:18:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 11:18:56.791: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-wbq26,SelfLink:/api/v1/namespaces/e2e-tests-watch-wbq26/configmaps/e2e-watch-test-configmap-b,UID:6fa5813e-4d89-11ea-a994-fa163e34d433,ResourceVersion:21412398,Generation:0,CreationTimestamp:2020-02-12 11:18:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:19:06.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-wbq26" for this suite. Feb 12 11:19:12.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:19:12.967: INFO: namespace: e2e-tests-watch-wbq26, resource: bindings, ignored listing per whitelist Feb 12 11:19:13.065: INFO: namespace e2e-tests-watch-wbq26 deletion completed in 6.24366058s • [SLOW TEST:69.915 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:19:13.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-vdcbd/configmap-test-7f81da71-4d89-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 12 11:19:13.355: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-vdcbd" to be "success or failure" Feb 12 11:19:13.390: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.664308ms Feb 12 11:19:15.593: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238261495s Feb 12 11:19:17.614: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259095166s Feb 12 11:19:19.700: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345032952s Feb 12 11:19:21.715: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.359548395s Feb 12 11:19:24.540: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.185319315s STEP: Saw pod success Feb 12 11:19:24.541: INFO: Pod "pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:19:24.556: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005 container env-test: STEP: delete the pod Feb 12 11:19:24.900: INFO: Waiting for pod pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005 to disappear Feb 12 11:19:24.940: INFO: Pod pod-configmaps-7f837fd1-4d89-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:19:24.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vdcbd" for this suite. Feb 12 11:19:33.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:19:33.411: INFO: namespace: e2e-tests-configmap-vdcbd, resource: bindings, ignored listing per whitelist Feb 12 11:19:33.458: INFO: namespace e2e-tests-configmap-vdcbd deletion completed in 8.499684066s • [SLOW TEST:20.393 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:19:33.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 12 11:19:33.808: INFO: Waiting up to 5m0s for pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-dldlk" to be "success or failure" Feb 12 11:19:33.825: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.202443ms Feb 12 11:19:38.357: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548371303s Feb 12 11:19:40.410: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601608309s Feb 12 11:19:42.420: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.611581336s Feb 12 11:19:44.437: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.62923188s Feb 12 11:19:46.462: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.653841002s STEP: Saw pod success Feb 12 11:19:46.462: INFO: Pod "downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:19:46.479: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005 container dapi-container: STEP: delete the pod Feb 12 11:19:46.701: INFO: Waiting for pod downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005 to disappear Feb 12 11:19:46.842: INFO: Pod downward-api-8baeb92c-4d89-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:19:46.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dldlk" for this suite. Feb 12 11:19:54.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:19:54.793: INFO: namespace: e2e-tests-downward-api-dldlk, resource: bindings, ignored listing per whitelist Feb 12 11:19:54.871: INFO: namespace e2e-tests-downward-api-dldlk deletion completed in 8.01530684s • [SLOW TEST:21.413 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:19:54.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:20:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-zntkk" for this suite. Feb 12 11:20:07.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:20:08.048: INFO: namespace: e2e-tests-namespaces-zntkk, resource: bindings, ignored listing per whitelist Feb 12 11:20:08.110: INFO: namespace e2e-tests-namespaces-zntkk deletion completed in 6.225962014s STEP: Destroying namespace "e2e-tests-nsdeletetest-8qp8x" for this suite. Feb 12 11:20:08.114: INFO: Namespace e2e-tests-nsdeletetest-8qp8x was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-q7rp2" for this suite. 
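The namespace test above boils down to: create a namespace, put a Service in it, delete the namespace, wait for the namespace controller to finish garbage collection, then recreate the namespace and confirm no Service survived. A rough client-go sketch of that flow follows; the nsdelete-demo and test-service names, the polling loop, and the context-taking call signatures are assumptions, not what the suite actually runs.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	nsName := "nsdelete-demo" // hypothetical; the suite uses generated e2e-tests-nsdeletetest-* names

	// Create the namespace and a service inside it.
	_, err = client.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: nsName}}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := client.CoreV1().Services(nsName).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete the namespace and wait until it is gone; the namespace controller
	// garbage-collects everything inside it, including the service.
	if err := client.CoreV1().Namespaces().Delete(ctx, nsName, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for {
		_, err := client.CoreV1().Namespaces().Get(ctx, nsName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Recreate the namespace and confirm it comes back empty.
	_, err = client.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: nsName}}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	svcs, err := client.CoreV1().Services(nsName).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services in recreated namespace: %d\n", len(svcs.Items)) // expected: 0
}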
Feb 12 11:20:14.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:20:14.258: INFO: namespace: e2e-tests-nsdeletetest-q7rp2, resource: bindings, ignored listing per whitelist Feb 12 11:20:14.333: INFO: namespace e2e-tests-nsdeletetest-q7rp2 deletion completed in 6.218749072s • [SLOW TEST:19.461 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:20:14.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 12 11:20:27.195: INFO: Successfully updated pod "pod-update-a3fc5911-4d89-11ea-b4b9-0242ac110005" STEP: verifying the updated pod is in kubernetes Feb 12 11:20:27.273: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:20:27.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wrgzb" for this suite. 
Feb 12 11:20:51.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:20:51.342: INFO: namespace: e2e-tests-pods-wrgzb, resource: bindings, ignored listing per whitelist Feb 12 11:20:51.747: INFO: namespace e2e-tests-pods-wrgzb deletion completed in 24.470183046s • [SLOW TEST:37.414 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:20:51.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 12 11:20:51.956: INFO: Waiting up to 5m0s for pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-9tc44" to be "success or failure" Feb 12 11:20:51.980: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.151664ms Feb 12 11:20:53.996: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039669936s Feb 12 11:20:56.013: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056784252s Feb 12 11:20:58.190: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233592501s Feb 12 11:21:00.216: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.259545363s Feb 12 11:21:03.069: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.113029457s STEP: Saw pod success Feb 12 11:21:03.069: INFO: Pod "pod-ba49a414-4d89-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:21:03.086: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ba49a414-4d89-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 11:21:03.774: INFO: Waiting for pod pod-ba49a414-4d89-11ea-b4b9-0242ac110005 to disappear Feb 12 11:21:04.053: INFO: Pod pod-ba49a414-4d89-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:21:04.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9tc44" for this suite. 
Feb 12 11:21:10.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:21:10.412: INFO: namespace: e2e-tests-emptydir-9tc44, resource: bindings, ignored listing per whitelist Feb 12 11:21:10.716: INFO: namespace e2e-tests-emptydir-9tc44 deletion completed in 6.623506584s • [SLOW TEST:18.969 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:21:10.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:21:10.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-n9bcv" to be "success or failure" Feb 12 11:21:10.967: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.60872ms Feb 12 11:21:13.120: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166123247s Feb 12 11:21:15.163: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209305872s Feb 12 11:21:17.183: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22927032s Feb 12 11:21:19.194: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240164968s Feb 12 11:21:21.652: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.698357454s Feb 12 11:21:23.760: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.805984966s STEP: Saw pod success Feb 12 11:21:23.760: INFO: Pod "downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:21:23.767: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:21:24.444: INFO: Waiting for pod downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005 to disappear Feb 12 11:21:24.476: INFO: Pod downwardapi-volume-c59c0b9a-4d89-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:21:24.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n9bcv" for this suite. Feb 12 11:21:30.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:21:30.823: INFO: namespace: e2e-tests-projected-n9bcv, resource: bindings, ignored listing per whitelist Feb 12 11:21:30.863: INFO: namespace e2e-tests-projected-n9bcv deletion completed in 6.301585711s • [SLOW TEST:20.147 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:21:30.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:21:31.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-5mwfx" to be "success or failure" Feb 12 11:21:31.183: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.399052ms Feb 12 11:21:33.198: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057241615s Feb 12 11:21:35.211: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069851418s Feb 12 11:21:37.727: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586097926s Feb 12 11:21:39.753: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.612173535s Feb 12 11:21:41.775: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.634376054s STEP: Saw pod success Feb 12 11:21:41.775: INFO: Pod "downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:21:41.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:21:41.914: INFO: Waiting for pod downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005 to disappear Feb 12 11:21:42.022: INFO: Pod downwardapi-volume-d1a2599f-4d89-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:21:42.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5mwfx" for this suite. Feb 12 11:21:50.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:21:50.330: INFO: namespace: e2e-tests-downward-api-5mwfx, resource: bindings, ignored listing per whitelist Feb 12 11:21:50.404: INFO: namespace e2e-tests-downward-api-5mwfx deletion completed in 8.354357544s • [SLOW TEST:19.539 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:21:50.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-dd4bb9ac-4d89-11ea-b4b9-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-dd4bb94e-4d89-11ea-b4b9-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 12 11:21:50.818: INFO: Waiting up to 5m0s for pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-gbpzb" to be "success or failure" Feb 12 11:21:50.830: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.212418ms Feb 12 11:21:53.005: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186603627s Feb 12 11:21:55.029: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.210397524s Feb 12 11:21:57.520: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.701686269s Feb 12 11:21:59.665: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.846283529s Feb 12 11:22:01.703: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.884950317s Feb 12 11:22:03.853: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.035200054s STEP: Saw pod success Feb 12 11:22:03.854: INFO: Pod "projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:22:03.884: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005 container projected-all-volume-test: STEP: delete the pod Feb 12 11:22:04.383: INFO: Waiting for pod projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005 to disappear Feb 12 11:22:04.409: INFO: Pod projected-volume-dd4bb8f5-4d89-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:22:04.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gbpzb" for this suite. Feb 12 11:22:10.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:22:10.743: INFO: namespace: e2e-tests-projected-gbpzb, resource: bindings, ignored listing per whitelist Feb 12 11:22:10.771: INFO: namespace e2e-tests-projected-gbpzb deletion completed in 6.348062601s • [SLOW TEST:20.367 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:22:10.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-gnmlq STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-gnmlq STEP: Deleting pre-stop pod Feb 12 11:22:38.067: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:22:38.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-gnmlq" for this suite. Feb 12 11:23:18.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:23:18.358: INFO: namespace: e2e-tests-prestop-gnmlq, resource: bindings, ignored listing per whitelist Feb 12 11:23:18.367: INFO: namespace e2e-tests-prestop-gnmlq deletion completed in 40.261153723s • [SLOW TEST:67.595 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:23:18.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0212 11:23:49.434079 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 12 11:23:49.434: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:23:49.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-b8kvw" for this suite. Feb 12 11:24:01.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:24:01.544: INFO: namespace: e2e-tests-gc-b8kvw, resource: bindings, ignored listing per whitelist Feb 12 11:24:02.772: INFO: namespace e2e-tests-gc-b8kvw deletion completed in 13.331638727s • [SLOW TEST:44.405 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:24:02.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 12 11:24:03.925: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 12 11:24:03.973: INFO: Number of nodes with available pods: 0 Feb 12 11:24:03.973: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 12 11:24:04.260: INFO: Number of nodes with available pods: 0 Feb 12 11:24:04.260: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:05.278: INFO: Number of nodes with available pods: 0 Feb 12 11:24:05.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:06.273: INFO: Number of nodes with available pods: 0 Feb 12 11:24:06.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:07.283: INFO: Number of nodes with available pods: 0 Feb 12 11:24:07.283: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:08.295: INFO: Number of nodes with available pods: 0 Feb 12 11:24:08.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:10.073: INFO: Number of nodes with available pods: 0 Feb 12 11:24:10.073: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:10.274: INFO: Number of nodes with available pods: 0 Feb 12 11:24:10.274: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:11.272: INFO: Number of nodes with available pods: 0 Feb 12 11:24:11.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:12.273: INFO: Number of nodes with available pods: 0 Feb 12 11:24:12.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:13.273: INFO: Number of nodes with available pods: 1 Feb 12 11:24:13.273: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 12 11:24:13.357: INFO: Number of nodes with available pods: 1 Feb 12 11:24:13.357: INFO: Number of running nodes: 0, number of available pods: 1 Feb 12 11:24:14.373: INFO: Number of nodes with available pods: 0 Feb 12 11:24:14.374: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 12 11:24:14.412: INFO: Number of nodes with available pods: 0 Feb 12 11:24:14.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:15.446: INFO: Number of nodes with available pods: 0 Feb 12 11:24:15.446: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:16.454: INFO: Number of nodes with available pods: 0 Feb 12 11:24:16.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:17.428: INFO: Number of nodes with available pods: 0 Feb 12 11:24:17.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:18.443: INFO: Number of nodes with available pods: 0 Feb 12 11:24:18.444: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:19.426: INFO: Number of nodes with available pods: 0 Feb 12 11:24:19.426: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:20.431: INFO: Number of nodes with available pods: 0 Feb 12 11:24:20.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:21.443: INFO: Number of nodes with available pods: 0 Feb 12 11:24:21.443: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:22.430: INFO: Number of nodes with available pods: 0 Feb 12 11:24:22.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:23.448: INFO: Number of 
nodes with available pods: 0 Feb 12 11:24:23.448: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:24.438: INFO: Number of nodes with available pods: 0 Feb 12 11:24:24.439: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:25.422: INFO: Number of nodes with available pods: 0 Feb 12 11:24:25.422: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:26.435: INFO: Number of nodes with available pods: 0 Feb 12 11:24:26.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:28.015: INFO: Number of nodes with available pods: 0 Feb 12 11:24:28.015: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:28.458: INFO: Number of nodes with available pods: 0 Feb 12 11:24:28.458: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:29.481: INFO: Number of nodes with available pods: 0 Feb 12 11:24:29.481: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:30.530: INFO: Number of nodes with available pods: 0 Feb 12 11:24:30.530: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 12 11:24:31.431: INFO: Number of nodes with available pods: 1 Feb 12 11:24:31.431: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9cvp5, will wait for the garbage collector to delete the pods Feb 12 11:24:31.535: INFO: Deleting DaemonSet.extensions daemon-set took: 30.262244ms Feb 12 11:24:31.636: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.344064ms Feb 12 11:24:42.570: INFO: Number of nodes with available pods: 0 Feb 12 11:24:42.570: INFO: Number of running nodes: 0, number of available pods: 0 Feb 12 11:24:42.582: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9cvp5/daemonsets","resourceVersion":"21413160"},"items":null} Feb 12 11:24:42.669: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9cvp5/pods","resourceVersion":"21413160"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:24:42.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9cvp5" for this suite. 
Feb 12 11:24:48.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:24:48.955: INFO: namespace: e2e-tests-daemonsets-9cvp5, resource: bindings, ignored listing per whitelist Feb 12 11:24:49.065: INFO: namespace e2e-tests-daemonsets-9cvp5 deletion completed in 6.316059089s • [SLOW TEST:46.293 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:24:49.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 12 11:25:00.102: INFO: Successfully updated pod "annotationupdate47d34272-4d8a-11ea-b4b9-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:25:02.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vtmkv" for this suite. 
Feb 12 11:25:26.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:25:26.433: INFO: namespace: e2e-tests-downward-api-vtmkv, resource: bindings, ignored listing per whitelist Feb 12 11:25:26.590: INFO: namespace e2e-tests-downward-api-vtmkv deletion completed in 24.334332324s • [SLOW TEST:37.524 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:25:26.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 12 11:25:26.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:29.130: INFO: stderr: "" Feb 12 11:25:29.130: INFO: stdout: "pod/pause created\n" Feb 12 11:25:29.130: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 12 11:25:29.130: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-wxhtd" to be "running and ready" Feb 12 11:25:29.256: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 126.09581ms Feb 12 11:25:31.280: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150417581s Feb 12 11:25:33.305: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174739069s Feb 12 11:25:35.678: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.548534204s Feb 12 11:25:37.692: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562070725s Feb 12 11:25:39.708: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.578068054s Feb 12 11:25:39.708: INFO: Pod "pause" satisfied condition "running and ready" Feb 12 11:25:39.708: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 12 11:25:39.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:39.991: INFO: stderr: "" Feb 12 11:25:39.991: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 12 11:25:39.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:40.150: INFO: stderr: "" Feb 12 11:25:40.150: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 12 11:25:40.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:40.268: INFO: stderr: "" Feb 12 11:25:40.268: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 12 11:25:40.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:40.398: INFO: stderr: "" Feb 12 11:25:40.398: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 12 11:25:40.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:40.768: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 11:25:40.768: INFO: stdout: "pod \"pause\" force deleted\n" Feb 12 11:25:40.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-wxhtd' Feb 12 11:25:41.108: INFO: stderr: "No resources found.\n" Feb 12 11:25:41.108: INFO: stdout: "" Feb 12 11:25:41.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-wxhtd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 11:25:41.215: INFO: stderr: "" Feb 12 11:25:41.215: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:25:41.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wxhtd" for this suite. 
Feb 12 11:25:47.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:25:47.404: INFO: namespace: e2e-tests-kubectl-wxhtd, resource: bindings, ignored listing per whitelist Feb 12 11:25:47.470: INFO: namespace e2e-tests-kubectl-wxhtd deletion completed in 6.241951067s • [SLOW TEST:20.878 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:25:47.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6aa21a54-4d8a-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume secrets Feb 12 11:25:47.916: INFO: Waiting up to 5m0s for pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-2wdlh" to be "success or failure" Feb 12 11:25:47.950: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.386994ms Feb 12 11:25:49.964: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047598184s Feb 12 11:25:52.032: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115708356s Feb 12 11:25:54.045: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128596342s Feb 12 11:25:56.058: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142083953s Feb 12 11:25:58.072: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.155694701s STEP: Saw pod success Feb 12 11:25:58.072: INFO: Pod "pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:25:58.087: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 12 11:25:58.818: INFO: Waiting for pod pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005 to disappear Feb 12 11:25:58.834: INFO: Pod pod-secrets-6aa4213e-4d8a-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:25:58.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2wdlh" for this suite. Feb 12 11:26:04.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:26:05.025: INFO: namespace: e2e-tests-secrets-2wdlh, resource: bindings, ignored listing per whitelist Feb 12 11:26:05.155: INFO: namespace e2e-tests-secrets-2wdlh deletion completed in 6.31161973s • [SLOW TEST:17.685 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:26:05.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 12 11:26:05.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:05.965: INFO: stderr: "" Feb 12 11:26:05.965: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 11:26:05.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:06.281: INFO: stderr: "" Feb 12 11:26:06.281: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " Feb 12 11:26:06.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:06.429: INFO: stderr: "" Feb 12 11:26:06.429: INFO: stdout: "" Feb 12 11:26:06.429: INFO: update-demo-nautilus-2nqv4 is created but not running Feb 12 11:26:11.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:11.583: INFO: stderr: "" Feb 12 11:26:11.583: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " Feb 12 11:26:11.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:11.710: INFO: stderr: "" Feb 12 11:26:11.710: INFO: stdout: "" Feb 12 11:26:11.710: INFO: update-demo-nautilus-2nqv4 is created but not running Feb 12 11:26:16.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:16.840: INFO: stderr: "" Feb 12 11:26:16.840: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " Feb 12 11:26:16.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:16.980: INFO: stderr: "" Feb 12 11:26:16.980: INFO: stdout: "" Feb 12 11:26:16.980: INFO: update-demo-nautilus-2nqv4 is created but not running Feb 12 11:26:21.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:22.173: INFO: stderr: "" Feb 12 11:26:22.173: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " Feb 12 11:26:22.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:22.388: INFO: stderr: "" Feb 12 11:26:22.388: INFO: stdout: "true" Feb 12 11:26:22.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:22.522: INFO: stderr: "" Feb 12 11:26:22.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:26:22.522: INFO: validating pod update-demo-nautilus-2nqv4 Feb 12 11:26:22.548: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:26:22.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 12 11:26:22.548: INFO: update-demo-nautilus-2nqv4 is verified up and running Feb 12 11:26:22.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:22.688: INFO: stderr: "" Feb 12 11:26:22.688: INFO: stdout: "true" Feb 12 11:26:22.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jmrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:22.840: INFO: stderr: "" Feb 12 11:26:22.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:26:22.840: INFO: validating pod update-demo-nautilus-jmrfc Feb 12 11:26:22.856: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:26:22.857: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:26:22.857: INFO: update-demo-nautilus-jmrfc is verified up and running STEP: scaling down the replication controller Feb 12 11:26:22.861: INFO: scanned /root for discovery docs: Feb 12 11:26:22.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:24.162: INFO: stderr: "" Feb 12 11:26:24.163: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 11:26:24.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:24.374: INFO: stderr: "" Feb 12 11:26:24.374: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 11:26:29.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:29.935: INFO: stderr: "" Feb 12 11:26:29.935: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 11:26:34.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:35.216: INFO: stderr: "" Feb 12 11:26:35.216: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 11:26:40.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:40.417: INFO: stderr: "" Feb 12 11:26:40.417: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-jmrfc " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 11:26:45.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} 
{{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:45.659: INFO: stderr: "" Feb 12 11:26:45.659: INFO: stdout: "update-demo-nautilus-2nqv4 " Feb 12 11:26:45.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:45.818: INFO: stderr: "" Feb 12 11:26:45.818: INFO: stdout: "true" Feb 12 11:26:45.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:45.978: INFO: stderr: "" Feb 12 11:26:45.978: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:26:45.978: INFO: validating pod update-demo-nautilus-2nqv4 Feb 12 11:26:45.987: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:26:45.987: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:26:45.987: INFO: update-demo-nautilus-2nqv4 is verified up and running STEP: scaling up the replication controller Feb 12 11:26:45.991: INFO: scanned /root for discovery docs: Feb 12 11:26:45.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:47.956: INFO: stderr: "" Feb 12 11:26:47.956: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 11:26:47.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:48.135: INFO: stderr: "" Feb 12 11:26:48.135: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-4vsbf " Feb 12 11:26:48.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:48.277: INFO: stderr: "" Feb 12 11:26:48.277: INFO: stdout: "true" Feb 12 11:26:48.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:48.476: INFO: stderr: "" Feb 12 11:26:48.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:26:48.476: INFO: validating pod update-demo-nautilus-2nqv4 Feb 12 11:26:48.496: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:26:48.496: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:26:48.496: INFO: update-demo-nautilus-2nqv4 is verified up and running Feb 12 11:26:48.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vsbf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:48.608: INFO: stderr: "" Feb 12 11:26:48.608: INFO: stdout: "" Feb 12 11:26:48.608: INFO: update-demo-nautilus-4vsbf is created but not running Feb 12 11:26:53.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:54.306: INFO: stderr: "" Feb 12 11:26:54.306: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-4vsbf " Feb 12 11:26:54.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:55.832: INFO: stderr: "" Feb 12 11:26:55.832: INFO: stdout: "true" Feb 12 11:26:55.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:56.072: INFO: stderr: "" Feb 12 11:26:56.072: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:26:56.072: INFO: validating pod update-demo-nautilus-2nqv4 Feb 12 11:26:56.170: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:26:56.170: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:26:56.170: INFO: update-demo-nautilus-2nqv4 is verified up and running Feb 12 11:26:56.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vsbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:26:56.353: INFO: stderr: "" Feb 12 11:26:56.353: INFO: stdout: "" Feb 12 11:26:56.353: INFO: update-demo-nautilus-4vsbf is created but not running Feb 12 11:27:01.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:01.573: INFO: stderr: "" Feb 12 11:27:01.573: INFO: stdout: "update-demo-nautilus-2nqv4 update-demo-nautilus-4vsbf " Feb 12 11:27:01.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:01.730: INFO: stderr: "" Feb 12 11:27:01.730: INFO: stdout: "true" Feb 12 11:27:01.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nqv4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:01.855: INFO: stderr: "" Feb 12 11:27:01.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:27:01.855: INFO: validating pod update-demo-nautilus-2nqv4 Feb 12 11:27:01.865: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:27:01.865: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:27:01.865: INFO: update-demo-nautilus-2nqv4 is verified up and running Feb 12 11:27:01.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vsbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:02.177: INFO: stderr: "" Feb 12 11:27:02.177: INFO: stdout: "true" Feb 12 11:27:02.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vsbf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:02.428: INFO: stderr: "" Feb 12 11:27:02.428: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 11:27:02.428: INFO: validating pod update-demo-nautilus-4vsbf Feb 12 11:27:02.448: INFO: got data: { "image": "nautilus.jpg" } Feb 12 11:27:02.448: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 11:27:02.448: INFO: update-demo-nautilus-4vsbf is verified up and running STEP: using delete to clean up resources Feb 12 11:27:02.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:02.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 11:27:02.615: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 12 11:27:02.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-sn4ln' Feb 12 11:27:03.066: INFO: stderr: "No resources found.\n" Feb 12 11:27:03.066: INFO: stdout: "" Feb 12 11:27:03.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-sn4ln -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 11:27:03.265: INFO: stderr: "" Feb 12 11:27:03.265: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:27:03.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sn4ln" for this suite. 
Feb 12 11:27:27.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:27:27.372: INFO: namespace: e2e-tests-kubectl-sn4ln, resource: bindings, ignored listing per whitelist Feb 12 11:27:27.521: INFO: namespace e2e-tests-kubectl-sn4ln deletion completed in 24.239896035s • [SLOW TEST:82.365 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:27:27.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 12 11:27:27.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 12 11:27:27.886: INFO: stderr: "" Feb 12 11:27:27.886: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:27:27.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hq9h4" for this suite. 
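The api-versions check above reduces to a single kubectl call. A hand-run equivalent of the assertion that v1 is among the served groups (any kubeconfig that reaches the apiserver will do):

# print the served API groups/versions and look for the bare "v1" line
kubectl --kubeconfig="$HOME/.kube/config" api-versions | grep -x v1 \
  && echo "v1 is served" \
  || echo "v1 is missing"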
Feb 12 11:27:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:27:34.117: INFO: namespace: e2e-tests-kubectl-hq9h4, resource: bindings, ignored listing per whitelist Feb 12 11:27:34.151: INFO: namespace e2e-tests-kubectl-hq9h4 deletion completed in 6.242019619s • [SLOW TEST:6.630 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:27:34.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-rhdj STEP: Creating a pod to test atomic-volume-subpath Feb 12 11:27:34.382: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rhdj" in namespace "e2e-tests-subpath-kv2j5" to be "success or failure" Feb 12 11:27:34.405: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 23.000368ms Feb 12 11:27:36.879: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.496326742s Feb 12 11:27:38.903: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520837884s Feb 12 11:27:41.169: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.786143968s Feb 12 11:27:43.179: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796280032s Feb 12 11:27:47.109: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.726916283s Feb 12 11:27:49.240: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.857872925s Feb 12 11:27:51.269: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.886150345s Feb 12 11:27:53.337: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 18.954694525s Feb 12 11:27:55.358: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 20.975333821s Feb 12 11:27:57.381: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.998683898s Feb 12 11:27:59.404: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 25.021415401s Feb 12 11:28:01.420: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 27.037525426s Feb 12 11:28:03.459: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 29.076375231s Feb 12 11:28:05.478: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 31.09550073s Feb 12 11:28:07.495: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 33.112129313s Feb 12 11:28:09.515: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Running", Reason="", readiness=false. Elapsed: 35.132212719s Feb 12 11:28:11.536: INFO: Pod "pod-subpath-test-configmap-rhdj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.153946799s STEP: Saw pod success Feb 12 11:28:11.536: INFO: Pod "pod-subpath-test-configmap-rhdj" satisfied condition "success or failure" Feb 12 11:28:11.555: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-rhdj container test-container-subpath-configmap-rhdj: STEP: delete the pod Feb 12 11:28:12.286: INFO: Waiting for pod pod-subpath-test-configmap-rhdj to disappear Feb 12 11:28:12.731: INFO: Pod pod-subpath-test-configmap-rhdj no longer exists STEP: Deleting pod pod-subpath-test-configmap-rhdj Feb 12 11:28:12.732: INFO: Deleting pod "pod-subpath-test-configmap-rhdj" in namespace "e2e-tests-subpath-kv2j5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:28:12.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-kv2j5" for this suite. 
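The log above does not print the pod spec for this subpath case, but what it exercises is a configMap volume key mounted with subPath over a file that already exists in the image. An illustrative manifest with assumed names, key and image (not the suite's own spec):

kubectl create namespace subpath-demo   # hypothetical namespace for the example
kubectl -n subpath-demo apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm
data:
  data-1: "from-configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["cat", "/etc/passwd"]      # /etc/passwd already exists in the image
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/passwd             # mountPath of an existing file
      subPath: data-1                    # the single key mounted over that file
  volumes:
  - name: cm-vol
    configMap:
      name: subpath-demo-cm
EOF

kubectl -n subpath-demo logs pod-subpath-demo should then print from-configmap rather than the image's original file content.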
Feb 12 11:28:18.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:28:19.146: INFO: namespace: e2e-tests-subpath-kv2j5, resource: bindings, ignored listing per whitelist Feb 12 11:28:19.175: INFO: namespace e2e-tests-subpath-kv2j5 deletion completed in 6.403314226s • [SLOW TEST:45.023 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:28:19.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zn6g2 in namespace e2e-tests-proxy-bqt7g I0212 11:28:19.590283 8 runners.go:184] Created replication controller with name: proxy-service-zn6g2, namespace: e2e-tests-proxy-bqt7g, replica count: 1 I0212 11:28:20.641470 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:21.641947 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:22.642439 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:23.643025 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:24.643445 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:25.643929 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:26.645143 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:27.645986 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:28.646378 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 11:28:29.646859 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:30.647628 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:31.648268 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:32.648911 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:33.649383 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:34.650000 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:35.650748 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:36.651292 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 11:28:37.651826 8 runners.go:184] proxy-service-zn6g2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 12 11:28:37.681: INFO: Endpoint e2e-tests-proxy-bqt7g/proxy-service-zn6g2 is not ready yet Feb 12 11:28:39.700: INFO: setup took 20.320276625s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 12 11:28:39.734: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-bqt7g/pods/proxy-service-zn6g2-jf2ht/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-gqcd STEP: Creating a pod to test atomic-volume-subpath Feb 12 11:28:55.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gqcd" in namespace "e2e-tests-subpath-rxsdc" to be "success or failure" Feb 12 11:28:55.580: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 26.691001ms Feb 12 11:28:57.665: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111467251s Feb 12 11:28:59.694: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14052945s Feb 12 11:29:01.792: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238750323s Feb 12 11:29:03.808: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254512042s Feb 12 11:29:05.825: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271683583s Feb 12 11:29:07.841: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.287578259s Feb 12 11:29:09.857: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.303706032s Feb 12 11:29:12.339: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.785632401s Feb 12 11:29:14.383: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.829673721s Feb 12 11:29:16.407: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 20.853373278s Feb 12 11:29:18.422: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 22.868970963s Feb 12 11:29:20.440: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 24.886389494s Feb 12 11:29:22.460: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 26.907099179s Feb 12 11:29:24.499: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 28.945300555s Feb 12 11:29:26.540: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 30.986976695s Feb 12 11:29:28.581: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 33.028134966s Feb 12 11:29:30.633: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Running", Reason="", readiness=false. Elapsed: 35.07937249s Feb 12 11:29:32.647: INFO: Pod "pod-subpath-test-downwardapi-gqcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.093754301s STEP: Saw pod success Feb 12 11:29:32.647: INFO: Pod "pod-subpath-test-downwardapi-gqcd" satisfied condition "success or failure" Feb 12 11:29:32.651: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-gqcd container test-container-subpath-downwardapi-gqcd: STEP: delete the pod Feb 12 11:29:33.401: INFO: Waiting for pod pod-subpath-test-downwardapi-gqcd to disappear Feb 12 11:29:33.750: INFO: Pod pod-subpath-test-downwardapi-gqcd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gqcd Feb 12 11:29:33.750: INFO: Deleting pod "pod-subpath-test-downwardapi-gqcd" in namespace "e2e-tests-subpath-rxsdc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:29:33.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-rxsdc" for this suite. 
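The downward-API variant logged above differs from the configmap case only in the volume source. A comparable illustrative pod (names and image again assumed, reusing the hypothetical subpath-demo namespace) mounts a single downwardAPI item via subPath and reads it back:

kubectl -n subpath-demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["cat", "/opt/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /opt/podinfo/podname    # a file path, created for the subPath mount
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name       # the downward API exposes the pod's own name
EOF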
Feb 12 11:29:40.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:29:40.188: INFO: namespace: e2e-tests-subpath-rxsdc, resource: bindings, ignored listing per whitelist Feb 12 11:29:40.238: INFO: namespace e2e-tests-subpath-rxsdc deletion completed in 6.444516181s • [SLOW TEST:44.923 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:29:40.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 12 11:29:40.740: INFO: PodSpec: initContainers in spec.initContainers Feb 12 11:30:59.814: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f57913b2-4d8a-11ea-b4b9-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-wzkh7", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-wzkh7/pods/pod-init-f57913b2-4d8a-11ea-b4b9-0242ac110005", UID:"f57a3d42-4d8a-11ea-a994-fa163e34d433", ResourceVersion:"21413938", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717103780, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"739992235"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fwkvz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ef8140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fwkvz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fwkvz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fwkvz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002534088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002728120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002534140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002534160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002534168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00253416c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717103781, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717103781, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717103781, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717103780, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00229c040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bdc1c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bdc230)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d2c175e26b2584b9ffb75780a827277fed253308e42587a6da97b8e3a69bbbc5"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00229c080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00229c060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:30:59.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-wzkh7" for this suite. Feb 12 11:31:23.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:31:24.154: INFO: namespace: e2e-tests-init-container-wzkh7, resource: bindings, ignored listing per whitelist Feb 12 11:31:24.172: INFO: namespace e2e-tests-init-container-wzkh7 deletion completed in 24.311448978s • [SLOW TEST:103.934 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:31:24.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Feb 12 11:31:24.327: INFO: Waiting up to 5m0s for pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-fntkh" to be "success or failure" Feb 12 11:31:24.403: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.033014ms Feb 12 11:31:26.540: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213123564s Feb 12 11:31:28.579: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.25228958s Feb 12 11:31:30.759: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43165994s Feb 12 11:31:32.778: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450793094s Feb 12 11:31:34.870: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542432249s Feb 12 11:31:36.885: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.557608944s STEP: Saw pod success Feb 12 11:31:36.885: INFO: Pod "pod-3335abdb-4d8b-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:31:36.891: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3335abdb-4d8b-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 11:31:37.416: INFO: Waiting for pod pod-3335abdb-4d8b-11ea-b4b9-0242ac110005 to disappear Feb 12 11:31:37.605: INFO: Pod pod-3335abdb-4d8b-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:31:37.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fntkh" for this suite. Feb 12 11:31:45.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:31:45.809: INFO: namespace: e2e-tests-emptydir-fntkh, resource: bindings, ignored listing per whitelist Feb 12 11:31:45.854: INFO: namespace e2e-tests-emptydir-fntkh deletion completed in 8.237734052s • [SLOW TEST:21.681 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:31:45.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4ggk9;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4ggk9;check="$$(dig +notcp +noall +answer +search 
dns-test-service.e2e-tests-dns-4ggk9.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4ggk9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 238.108.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.108.238_udp@PTR;check="$$(dig +tcp +noall +answer +search 238.108.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.108.238_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4ggk9;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4ggk9;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc 
SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4ggk9.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 238.108.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.108.238_udp@PTR;check="$$(dig +tcp +noall +answer +search 238.108.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.108.238_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 11:32:06.383: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4ggk9 from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.398: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.415: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.425: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.435: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.452: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.469: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.484: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.501: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.511: INFO: Unable to read 10.97.108.238_udp@PTR from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the 
requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.523: INFO: Unable to read 10.97.108.238_tcp@PTR from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.549: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.609: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.657: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4ggk9 from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.667: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4ggk9 from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.683: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.694: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.712: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.720: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.725: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.732: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.737: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods 
dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.741: INFO: Unable to read 10.97.108.238_udp@PTR from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.745: INFO: Unable to read 10.97.108.238_tcp@PTR from pod e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-40379050-4d8b-11ea-b4b9-0242ac110005) Feb 12 11:32:06.745: INFO: Lookups using e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-4ggk9 wheezy_udp@dns-test-service.e2e-tests-dns-4ggk9.svc wheezy_tcp@dns-test-service.e2e-tests-dns-4ggk9.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.97.108.238_udp@PTR 10.97.108.238_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-4ggk9 jessie_tcp@dns-test-service.e2e-tests-dns-4ggk9 jessie_udp@dns-test-service.e2e-tests-dns-4ggk9.svc jessie_tcp@dns-test-service.e2e-tests-dns-4ggk9.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-4ggk9.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.97.108.238_udp@PTR 10.97.108.238_tcp@PTR] Feb 12 11:32:12.070: INFO: DNS probes using e2e-tests-dns-4ggk9/dns-test-40379050-4d8b-11ea-b4b9-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:32:12.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-4ggk9" for this suite. 
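Each probe iteration in the generated wheezy/jessie scripts above is a pair of dig lookups whose result is written under /results. Condensed to its essentials (service and namespace names copied from the log; run inside any pod that has dig and the cluster's resolv.conf, e.g. one started with kubectl run and a dnsutils-style image):

# UDP and TCP lookups of the service name, bare and namespace-qualified
dig +notcp +noall +answer +search dns-test-service A
dig +tcp   +noall +answer +search dns-test-service A
dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-4ggk9.svc A
# SRV record for the named port, and a PTR lookup of the service ClusterIP
dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-4ggk9.svc SRV
dig +notcp +noall +answer 238.108.97.10.in-addr.arpa. PTR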
Feb 12 11:32:19.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:32:19.836: INFO: namespace: e2e-tests-dns-4ggk9, resource: bindings, ignored listing per whitelist Feb 12 11:32:19.951: INFO: namespace e2e-tests-dns-4ggk9 deletion completed in 7.027973997s • [SLOW TEST:34.097 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:32:19.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5474ddad-4d8b-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume secrets Feb 12 11:32:20.108: INFO: Waiting up to 5m0s for pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-tcgm7" to be "success or failure" Feb 12 11:32:20.251: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 142.595322ms Feb 12 11:32:22.269: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160233418s Feb 12 11:32:24.285: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177000214s Feb 12 11:32:26.412: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.303708928s Feb 12 11:32:28.424: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315594789s Feb 12 11:32:30.474: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.365203799s STEP: Saw pod success Feb 12 11:32:30.474: INFO: Pod "pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:32:30.500: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005 container secret-volume-test: STEP: delete the pod Feb 12 11:32:30.678: INFO: Waiting for pod pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005 to disappear Feb 12 11:32:30.689: INFO: Pod pod-secrets-547592f1-4d8b-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:32:30.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tcgm7" for this suite. 
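The secret-volume check above amounts to creating a secret, mounting it as a volume, and reading a key back from the filesystem. A hand-rolled equivalent with illustrative names (the suite generates its own secret name and content):

kubectl -n my-namespace create secret generic secret-demo --from-literal=data-1=value-1
kubectl -n my-namespace apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]   # should print value-1
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF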
Feb 12 11:32:36.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:32:37.193: INFO: namespace: e2e-tests-secrets-tcgm7, resource: bindings, ignored listing per whitelist Feb 12 11:32:37.203: INFO: namespace e2e-tests-secrets-tcgm7 deletion completed in 6.503756088s • [SLOW TEST:17.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:32:37.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 12 11:32:37.355: INFO: Waiting up to 5m0s for pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-j7dr8" to be "success or failure" Feb 12 11:32:37.370: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.362328ms Feb 12 11:32:39.968: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612653512s Feb 12 11:32:42.034: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678722452s Feb 12 11:32:44.359: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.003614691s Feb 12 11:32:46.373: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.017529293s Feb 12 11:32:48.388: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.032949209s Feb 12 11:32:50.412: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.056646223s STEP: Saw pod success Feb 12 11:32:50.412: INFO: Pod "downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:32:50.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005 container dapi-container: STEP: delete the pod Feb 12 11:32:51.015: INFO: Waiting for pod downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005 to disappear Feb 12 11:32:51.048: INFO: Pod downward-api-5ebd2585-4d8b-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:32:51.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j7dr8" for this suite. Feb 12 11:32:57.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:32:57.220: INFO: namespace: e2e-tests-downward-api-j7dr8, resource: bindings, ignored listing per whitelist Feb 12 11:32:57.284: INFO: namespace e2e-tests-downward-api-j7dr8 deletion completed in 6.229012755s • [SLOW TEST:20.080 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:32:57.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:32:57.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-wx2kz" to be "success or failure" Feb 12 11:32:57.590: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.314372ms Feb 12 11:32:59.892: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333851837s Feb 12 11:33:01.914: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355741453s Feb 12 11:33:03.998: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.440397146s Feb 12 11:33:06.018: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460393454s Feb 12 11:33:08.053: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.494951123s STEP: Saw pod success Feb 12 11:33:08.053: INFO: Pod "downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:33:08.085: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:33:08.261: INFO: Waiting for pod downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005 to disappear Feb 12 11:33:08.301: INFO: Pod downwardapi-volume-6ac7796c-4d8b-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:33:08.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wx2kz" for this suite. Feb 12 11:33:14.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:33:14.548: INFO: namespace: e2e-tests-downward-api-wx2kz, resource: bindings, ignored listing per whitelist Feb 12 11:33:14.769: INFO: namespace e2e-tests-downward-api-wx2kz deletion completed in 6.451932282s • [SLOW TEST:17.485 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:33:14.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 12 11:33:15.310: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7547211c-4d8b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0023ac5ca), BlockOwnerDeletion:(*bool)(0xc0023ac5cb)}} Feb 12 11:33:15.430: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7539dbe3-4d8b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0023ac7d2), BlockOwnerDeletion:(*bool)(0xc0023ac7d3)}} Feb 12 11:33:15.455: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"753d2018-4d8b-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0023ac96a), BlockOwnerDeletion:(*bool)(0xc0023ac96b)}} [AfterEach] [sig-api-machinery] 
Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:33:20.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tgzl9" for this suite. Feb 12 11:33:26.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:33:27.036: INFO: namespace: e2e-tests-gc-tgzl9, resource: bindings, ignored listing per whitelist Feb 12 11:33:27.080: INFO: namespace e2e-tests-gc-tgzl9 deletion completed in 6.324744401s • [SLOW TEST:12.309 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:33:27.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:33:27.368: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-hl8ps" to be "success or failure" Feb 12 11:33:27.383: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.335612ms Feb 12 11:33:29.404: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035931492s Feb 12 11:33:31.416: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048200243s Feb 12 11:33:33.794: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42655466s Feb 12 11:33:35.814: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446164922s Feb 12 11:33:37.894: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.526453471s STEP: Saw pod success Feb 12 11:33:37.894: INFO: Pod "downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:33:37.910: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:33:38.353: INFO: Waiting for pod downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005 to disappear Feb 12 11:33:38.370: INFO: Pod downwardapi-volume-7c8b5c4f-4d8b-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:33:38.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hl8ps" for this suite. Feb 12 11:33:44.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:33:44.802: INFO: namespace: e2e-tests-projected-hl8ps, resource: bindings, ignored listing per whitelist Feb 12 11:33:44.811: INFO: namespace e2e-tests-projected-hl8ps deletion completed in 6.432199287s • [SLOW TEST:17.731 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:33:44.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-wx45t Feb 12 11:33:57.084: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-wx45t STEP: checking the pod's current state and verifying that restartCount is present Feb 12 11:33:57.094: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:37:58.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wx45t" for this suite. 
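
The liveness-exec pod above stays up for the whole four-minute observation window because its probe keeps succeeding: the kubelet periodically runs cat /tmp/health inside the container and only restarts it if the command exits non-zero. A rough Go sketch of such a spec follows; the image, delay, period and threshold values are assumptions rather than values read from this run, and recent k8s.io/api names the probe's embedded handler field ProbeHandler (releases contemporary with this log called it Handler).

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Illustrative liveness-exec pod: the container creates /tmp/health and
    	// the kubelet periodically runs `cat /tmp/health`; as long as that exits
    	// with status 0 the restart count stays at 0, which is what the test checks.
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-example"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "liveness",
    				Image:   "busybox",
    				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					// Recent k8s.io/api: ProbeHandler; older releases: Handler.
    					ProbeHandler: corev1.ProbeHandler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 15,
    					PeriodSeconds:       5,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
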
Feb 12 11:38:06.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:38:07.054: INFO: namespace: e2e-tests-container-probe-wx45t, resource: bindings, ignored listing per whitelist Feb 12 11:38:07.143: INFO: namespace e2e-tests-container-probe-wx45t deletion completed in 8.301833729s • [SLOW TEST:262.332 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:38:07.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 12 11:38:35.792: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:35.792: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:35.906973 8 log.go:172] (0xc0000ff130) (0xc002626820) Create stream I0212 11:38:35.907086 8 log.go:172] (0xc0000ff130) (0xc002626820) Stream added, broadcasting: 1 I0212 11:38:35.915217 8 log.go:172] (0xc0000ff130) Reply frame received for 1 I0212 11:38:35.915331 8 log.go:172] (0xc0000ff130) (0xc0011363c0) Create stream I0212 11:38:35.915354 8 log.go:172] (0xc0000ff130) (0xc0011363c0) Stream added, broadcasting: 3 I0212 11:38:35.917696 8 log.go:172] (0xc0000ff130) Reply frame received for 3 I0212 11:38:35.917740 8 log.go:172] (0xc0000ff130) (0xc001136460) Create stream I0212 11:38:35.917753 8 log.go:172] (0xc0000ff130) (0xc001136460) Stream added, broadcasting: 5 I0212 11:38:35.919548 8 log.go:172] (0xc0000ff130) Reply frame received for 5 I0212 11:38:36.118127 8 log.go:172] (0xc0000ff130) Data frame received for 3 I0212 11:38:36.118270 8 log.go:172] (0xc0011363c0) (3) Data frame handling I0212 11:38:36.118359 8 log.go:172] (0xc0011363c0) (3) Data frame sent I0212 11:38:36.318974 8 log.go:172] (0xc0000ff130) (0xc0011363c0) Stream removed, broadcasting: 3 I0212 11:38:36.319194 8 log.go:172] (0xc0000ff130) Data frame received for 1 I0212 11:38:36.319218 8 log.go:172] (0xc002626820) (1) Data frame handling I0212 11:38:36.319240 8 log.go:172] (0xc002626820) (1) Data frame sent I0212 11:38:36.319246 8 log.go:172] (0xc0000ff130) (0xc002626820) Stream removed, broadcasting: 1 I0212 11:38:36.319602 8 log.go:172] (0xc0000ff130) (0xc001136460) Stream removed, broadcasting: 5 I0212 
11:38:36.319685 8 log.go:172] (0xc0000ff130) (0xc002626820) Stream removed, broadcasting: 1 I0212 11:38:36.319697 8 log.go:172] (0xc0000ff130) (0xc0011363c0) Stream removed, broadcasting: 3 I0212 11:38:36.319702 8 log.go:172] (0xc0000ff130) (0xc001136460) Stream removed, broadcasting: 5 Feb 12 11:38:36.319: INFO: Exec stderr: "" Feb 12 11:38:36.319: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:36.319: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:36.324113 8 log.go:172] (0xc0000ff130) Go away received I0212 11:38:36.413161 8 log.go:172] (0xc00124e580) (0xc001136640) Create stream I0212 11:38:36.413291 8 log.go:172] (0xc00124e580) (0xc001136640) Stream added, broadcasting: 1 I0212 11:38:36.436150 8 log.go:172] (0xc00124e580) Reply frame received for 1 I0212 11:38:36.436238 8 log.go:172] (0xc00124e580) (0xc0026268c0) Create stream I0212 11:38:36.436260 8 log.go:172] (0xc00124e580) (0xc0026268c0) Stream added, broadcasting: 3 I0212 11:38:36.437567 8 log.go:172] (0xc00124e580) Reply frame received for 3 I0212 11:38:36.437608 8 log.go:172] (0xc00124e580) (0xc001c66be0) Create stream I0212 11:38:36.437621 8 log.go:172] (0xc00124e580) (0xc001c66be0) Stream added, broadcasting: 5 I0212 11:38:36.444318 8 log.go:172] (0xc00124e580) Reply frame received for 5 I0212 11:38:36.794385 8 log.go:172] (0xc00124e580) Data frame received for 3 I0212 11:38:36.794499 8 log.go:172] (0xc0026268c0) (3) Data frame handling I0212 11:38:36.794530 8 log.go:172] (0xc0026268c0) (3) Data frame sent I0212 11:38:36.954979 8 log.go:172] (0xc00124e580) (0xc001c66be0) Stream removed, broadcasting: 5 I0212 11:38:36.955240 8 log.go:172] (0xc00124e580) Data frame received for 1 I0212 11:38:36.955290 8 log.go:172] (0xc00124e580) (0xc0026268c0) Stream removed, broadcasting: 3 I0212 11:38:36.955383 8 log.go:172] (0xc001136640) (1) Data frame handling I0212 11:38:36.955409 8 log.go:172] (0xc001136640) (1) Data frame sent I0212 11:38:36.955421 8 log.go:172] (0xc00124e580) (0xc001136640) Stream removed, broadcasting: 1 I0212 11:38:36.955443 8 log.go:172] (0xc00124e580) Go away received I0212 11:38:36.956175 8 log.go:172] (0xc00124e580) (0xc001136640) Stream removed, broadcasting: 1 I0212 11:38:36.956192 8 log.go:172] (0xc00124e580) (0xc0026268c0) Stream removed, broadcasting: 3 I0212 11:38:36.956218 8 log.go:172] (0xc00124e580) (0xc001c66be0) Stream removed, broadcasting: 5 Feb 12 11:38:36.956: INFO: Exec stderr: "" Feb 12 11:38:36.956: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:36.956: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:37.058883 8 log.go:172] (0xc00124ea50) (0xc001136d20) Create stream I0212 11:38:37.059190 8 log.go:172] (0xc00124ea50) (0xc001136d20) Stream added, broadcasting: 1 I0212 11:38:37.080722 8 log.go:172] (0xc00124ea50) Reply frame received for 1 I0212 11:38:37.080905 8 log.go:172] (0xc00124ea50) (0xc002626960) Create stream I0212 11:38:37.080942 8 log.go:172] (0xc00124ea50) (0xc002626960) Stream added, broadcasting: 3 I0212 11:38:37.083030 8 log.go:172] (0xc00124ea50) Reply frame received for 3 I0212 11:38:37.083069 8 log.go:172] (0xc00124ea50) (0xc001c66c80) Create stream I0212 11:38:37.083079 8 log.go:172] (0xc00124ea50) (0xc001c66c80) 
Stream added, broadcasting: 5 I0212 11:38:37.086126 8 log.go:172] (0xc00124ea50) Reply frame received for 5 I0212 11:38:37.260304 8 log.go:172] (0xc00124ea50) Data frame received for 3 I0212 11:38:37.260416 8 log.go:172] (0xc002626960) (3) Data frame handling I0212 11:38:37.260464 8 log.go:172] (0xc002626960) (3) Data frame sent I0212 11:38:37.416687 8 log.go:172] (0xc00124ea50) (0xc002626960) Stream removed, broadcasting: 3 I0212 11:38:37.416886 8 log.go:172] (0xc00124ea50) Data frame received for 1 I0212 11:38:37.416943 8 log.go:172] (0xc00124ea50) (0xc001c66c80) Stream removed, broadcasting: 5 I0212 11:38:37.417057 8 log.go:172] (0xc001136d20) (1) Data frame handling I0212 11:38:37.417103 8 log.go:172] (0xc001136d20) (1) Data frame sent I0212 11:38:37.417118 8 log.go:172] (0xc00124ea50) (0xc001136d20) Stream removed, broadcasting: 1 I0212 11:38:37.417141 8 log.go:172] (0xc00124ea50) Go away received I0212 11:38:37.417563 8 log.go:172] (0xc00124ea50) (0xc001136d20) Stream removed, broadcasting: 1 I0212 11:38:37.417595 8 log.go:172] (0xc00124ea50) (0xc002626960) Stream removed, broadcasting: 3 I0212 11:38:37.417609 8 log.go:172] (0xc00124ea50) (0xc001c66c80) Stream removed, broadcasting: 5 Feb 12 11:38:37.417: INFO: Exec stderr: "" Feb 12 11:38:37.417: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:37.418: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:37.507571 8 log.go:172] (0xc000d9c4d0) (0xc001c66f00) Create stream I0212 11:38:37.507691 8 log.go:172] (0xc000d9c4d0) (0xc001c66f00) Stream added, broadcasting: 1 I0212 11:38:37.515980 8 log.go:172] (0xc000d9c4d0) Reply frame received for 1 I0212 11:38:37.516033 8 log.go:172] (0xc000d9c4d0) (0xc002626a00) Create stream I0212 11:38:37.516045 8 log.go:172] (0xc000d9c4d0) (0xc002626a00) Stream added, broadcasting: 3 I0212 11:38:37.518855 8 log.go:172] (0xc000d9c4d0) Reply frame received for 3 I0212 11:38:37.518883 8 log.go:172] (0xc000d9c4d0) (0xc0024e9720) Create stream I0212 11:38:37.518895 8 log.go:172] (0xc000d9c4d0) (0xc0024e9720) Stream added, broadcasting: 5 I0212 11:38:37.520231 8 log.go:172] (0xc000d9c4d0) Reply frame received for 5 I0212 11:38:37.657310 8 log.go:172] (0xc000d9c4d0) Data frame received for 3 I0212 11:38:37.657366 8 log.go:172] (0xc002626a00) (3) Data frame handling I0212 11:38:37.657384 8 log.go:172] (0xc002626a00) (3) Data frame sent I0212 11:38:37.772409 8 log.go:172] (0xc000d9c4d0) Data frame received for 1 I0212 11:38:37.772490 8 log.go:172] (0xc001c66f00) (1) Data frame handling I0212 11:38:37.772571 8 log.go:172] (0xc001c66f00) (1) Data frame sent I0212 11:38:37.772773 8 log.go:172] (0xc000d9c4d0) (0xc001c66f00) Stream removed, broadcasting: 1 I0212 11:38:37.773431 8 log.go:172] (0xc000d9c4d0) (0xc002626a00) Stream removed, broadcasting: 3 I0212 11:38:37.773494 8 log.go:172] (0xc000d9c4d0) (0xc0024e9720) Stream removed, broadcasting: 5 I0212 11:38:37.773559 8 log.go:172] (0xc000d9c4d0) (0xc001c66f00) Stream removed, broadcasting: 1 I0212 11:38:37.773569 8 log.go:172] (0xc000d9c4d0) (0xc002626a00) Stream removed, broadcasting: 3 I0212 11:38:37.773575 8 log.go:172] (0xc000d9c4d0) (0xc0024e9720) Stream removed, broadcasting: 5 Feb 12 11:38:37.773: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 12 11:38:37.773: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:37.773: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:37.774083 8 log.go:172] (0xc000d9c4d0) Go away received I0212 11:38:37.983099 8 log.go:172] (0xc00259c2c0) (0xc0024e9a40) Create stream I0212 11:38:37.983363 8 log.go:172] (0xc00259c2c0) (0xc0024e9a40) Stream added, broadcasting: 1 I0212 11:38:38.031912 8 log.go:172] (0xc00259c2c0) Reply frame received for 1 I0212 11:38:38.032125 8 log.go:172] (0xc00259c2c0) (0xc001c670e0) Create stream I0212 11:38:38.032158 8 log.go:172] (0xc00259c2c0) (0xc001c670e0) Stream added, broadcasting: 3 I0212 11:38:38.043232 8 log.go:172] (0xc00259c2c0) Reply frame received for 3 I0212 11:38:38.043360 8 log.go:172] (0xc00259c2c0) (0xc002626b40) Create stream I0212 11:38:38.043401 8 log.go:172] (0xc00259c2c0) (0xc002626b40) Stream added, broadcasting: 5 I0212 11:38:38.059308 8 log.go:172] (0xc00259c2c0) Reply frame received for 5 I0212 11:38:38.267518 8 log.go:172] (0xc00259c2c0) Data frame received for 3 I0212 11:38:38.267639 8 log.go:172] (0xc001c670e0) (3) Data frame handling I0212 11:38:38.267699 8 log.go:172] (0xc001c670e0) (3) Data frame sent I0212 11:38:38.405587 8 log.go:172] (0xc00259c2c0) (0xc001c670e0) Stream removed, broadcasting: 3 I0212 11:38:38.405765 8 log.go:172] (0xc00259c2c0) Data frame received for 1 I0212 11:38:38.405834 8 log.go:172] (0xc0024e9a40) (1) Data frame handling I0212 11:38:38.405889 8 log.go:172] (0xc0024e9a40) (1) Data frame sent I0212 11:38:38.405957 8 log.go:172] (0xc00259c2c0) (0xc002626b40) Stream removed, broadcasting: 5 I0212 11:38:38.406185 8 log.go:172] (0xc00259c2c0) (0xc0024e9a40) Stream removed, broadcasting: 1 I0212 11:38:38.406233 8 log.go:172] (0xc00259c2c0) Go away received I0212 11:38:38.406527 8 log.go:172] (0xc00259c2c0) (0xc0024e9a40) Stream removed, broadcasting: 1 I0212 11:38:38.406571 8 log.go:172] (0xc00259c2c0) (0xc001c670e0) Stream removed, broadcasting: 3 I0212 11:38:38.406594 8 log.go:172] (0xc00259c2c0) (0xc002626b40) Stream removed, broadcasting: 5 Feb 12 11:38:38.406: INFO: Exec stderr: "" Feb 12 11:38:38.406: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:38.406: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:38.507551 8 log.go:172] (0xc00259c790) (0xc0024e9ea0) Create stream I0212 11:38:38.508437 8 log.go:172] (0xc00259c790) (0xc0024e9ea0) Stream added, broadcasting: 1 I0212 11:38:38.520628 8 log.go:172] (0xc00259c790) Reply frame received for 1 I0212 11:38:38.520724 8 log.go:172] (0xc00259c790) (0xc001136dc0) Create stream I0212 11:38:38.520802 8 log.go:172] (0xc00259c790) (0xc001136dc0) Stream added, broadcasting: 3 I0212 11:38:38.522670 8 log.go:172] (0xc00259c790) Reply frame received for 3 I0212 11:38:38.522711 8 log.go:172] (0xc00259c790) (0xc001c67180) Create stream I0212 11:38:38.522727 8 log.go:172] (0xc00259c790) (0xc001c67180) Stream added, broadcasting: 5 I0212 11:38:38.524419 8 log.go:172] (0xc00259c790) Reply frame received for 5 I0212 11:38:38.764736 8 log.go:172] (0xc00259c790) Data frame received for 3 I0212 11:38:38.764820 8 log.go:172] (0xc001136dc0) (3) Data frame handling I0212 11:38:38.764871 8 log.go:172] (0xc001136dc0) (3) Data frame sent I0212 11:38:38.898231 8 log.go:172] (0xc00259c790) Data 
frame received for 1 I0212 11:38:38.898354 8 log.go:172] (0xc0024e9ea0) (1) Data frame handling I0212 11:38:38.898370 8 log.go:172] (0xc0024e9ea0) (1) Data frame sent I0212 11:38:38.898939 8 log.go:172] (0xc00259c790) (0xc0024e9ea0) Stream removed, broadcasting: 1 I0212 11:38:38.899299 8 log.go:172] (0xc00259c790) (0xc001c67180) Stream removed, broadcasting: 5 I0212 11:38:38.899372 8 log.go:172] (0xc00259c790) (0xc001136dc0) Stream removed, broadcasting: 3 I0212 11:38:38.899439 8 log.go:172] (0xc00259c790) (0xc0024e9ea0) Stream removed, broadcasting: 1 I0212 11:38:38.899462 8 log.go:172] (0xc00259c790) (0xc001136dc0) Stream removed, broadcasting: 3 I0212 11:38:38.899471 8 log.go:172] (0xc00259c790) (0xc001c67180) Stream removed, broadcasting: 5 I0212 11:38:38.899668 8 log.go:172] (0xc00259c790) Go away received Feb 12 11:38:38.899: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 12 11:38:38.899: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:38.900: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:38.987613 8 log.go:172] (0xc00259ca50) (0xc0009ae280) Create stream I0212 11:38:38.987747 8 log.go:172] (0xc00259ca50) (0xc0009ae280) Stream added, broadcasting: 1 I0212 11:38:38.995758 8 log.go:172] (0xc00259ca50) Reply frame received for 1 I0212 11:38:38.995833 8 log.go:172] (0xc00259ca50) (0xc001c67220) Create stream I0212 11:38:38.995852 8 log.go:172] (0xc00259ca50) (0xc001c67220) Stream added, broadcasting: 3 I0212 11:38:38.997037 8 log.go:172] (0xc00259ca50) Reply frame received for 3 I0212 11:38:38.997124 8 log.go:172] (0xc00259ca50) (0xc001136e60) Create stream I0212 11:38:38.997137 8 log.go:172] (0xc00259ca50) (0xc001136e60) Stream added, broadcasting: 5 I0212 11:38:38.998429 8 log.go:172] (0xc00259ca50) Reply frame received for 5 I0212 11:38:39.150664 8 log.go:172] (0xc00259ca50) Data frame received for 3 I0212 11:38:39.150774 8 log.go:172] (0xc001c67220) (3) Data frame handling I0212 11:38:39.150790 8 log.go:172] (0xc001c67220) (3) Data frame sent I0212 11:38:39.320830 8 log.go:172] (0xc00259ca50) Data frame received for 1 I0212 11:38:39.320933 8 log.go:172] (0xc0009ae280) (1) Data frame handling I0212 11:38:39.321004 8 log.go:172] (0xc0009ae280) (1) Data frame sent I0212 11:38:39.328218 8 log.go:172] (0xc00259ca50) (0xc0009ae280) Stream removed, broadcasting: 1 I0212 11:38:39.328349 8 log.go:172] (0xc00259ca50) (0xc001c67220) Stream removed, broadcasting: 3 I0212 11:38:39.328435 8 log.go:172] (0xc00259ca50) (0xc001136e60) Stream removed, broadcasting: 5 I0212 11:38:39.328594 8 log.go:172] (0xc00259ca50) (0xc0009ae280) Stream removed, broadcasting: 1 I0212 11:38:39.328616 8 log.go:172] (0xc00259ca50) (0xc001c67220) Stream removed, broadcasting: 3 I0212 11:38:39.328659 8 log.go:172] (0xc00259ca50) (0xc001136e60) Stream removed, broadcasting: 5 I0212 11:38:39.329553 8 log.go:172] (0xc00259ca50) Go away received Feb 12 11:38:39.330: INFO: Exec stderr: "" Feb 12 11:38:39.330: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:39.330: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:39.401924 8 log.go:172] (0xc00124f130) 
(0xc001137040) Create stream I0212 11:38:39.402101 8 log.go:172] (0xc00124f130) (0xc001137040) Stream added, broadcasting: 1 I0212 11:38:39.470609 8 log.go:172] (0xc00124f130) Reply frame received for 1 I0212 11:38:39.470738 8 log.go:172] (0xc00124f130) (0xc002626be0) Create stream I0212 11:38:39.470753 8 log.go:172] (0xc00124f130) (0xc002626be0) Stream added, broadcasting: 3 I0212 11:38:39.471577 8 log.go:172] (0xc00124f130) Reply frame received for 3 I0212 11:38:39.471605 8 log.go:172] (0xc00124f130) (0xc002626c80) Create stream I0212 11:38:39.471612 8 log.go:172] (0xc00124f130) (0xc002626c80) Stream added, broadcasting: 5 I0212 11:38:39.474346 8 log.go:172] (0xc00124f130) Reply frame received for 5 I0212 11:38:39.618348 8 log.go:172] (0xc00124f130) Data frame received for 3 I0212 11:38:39.618478 8 log.go:172] (0xc002626be0) (3) Data frame handling I0212 11:38:39.618593 8 log.go:172] (0xc002626be0) (3) Data frame sent I0212 11:38:39.729857 8 log.go:172] (0xc00124f130) Data frame received for 1 I0212 11:38:39.729958 8 log.go:172] (0xc001137040) (1) Data frame handling I0212 11:38:39.729978 8 log.go:172] (0xc001137040) (1) Data frame sent I0212 11:38:39.730014 8 log.go:172] (0xc00124f130) (0xc001137040) Stream removed, broadcasting: 1 I0212 11:38:39.730079 8 log.go:172] (0xc00124f130) (0xc002626be0) Stream removed, broadcasting: 3 I0212 11:38:39.730405 8 log.go:172] (0xc00124f130) (0xc002626c80) Stream removed, broadcasting: 5 I0212 11:38:39.730453 8 log.go:172] (0xc00124f130) Go away received I0212 11:38:39.730477 8 log.go:172] (0xc00124f130) (0xc001137040) Stream removed, broadcasting: 1 I0212 11:38:39.730490 8 log.go:172] (0xc00124f130) (0xc002626be0) Stream removed, broadcasting: 3 I0212 11:38:39.730506 8 log.go:172] (0xc00124f130) (0xc002626c80) Stream removed, broadcasting: 5 Feb 12 11:38:39.730: INFO: Exec stderr: "" Feb 12 11:38:39.730: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:39.730: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:39.802780 8 log.go:172] (0xc000d9c9a0) (0xc001c674a0) Create stream I0212 11:38:39.802932 8 log.go:172] (0xc000d9c9a0) (0xc001c674a0) Stream added, broadcasting: 1 I0212 11:38:39.809197 8 log.go:172] (0xc000d9c9a0) Reply frame received for 1 I0212 11:38:39.809236 8 log.go:172] (0xc000d9c9a0) (0xc0009ae500) Create stream I0212 11:38:39.809245 8 log.go:172] (0xc000d9c9a0) (0xc0009ae500) Stream added, broadcasting: 3 I0212 11:38:39.810181 8 log.go:172] (0xc000d9c9a0) Reply frame received for 3 I0212 11:38:39.810201 8 log.go:172] (0xc000d9c9a0) (0xc001c67540) Create stream I0212 11:38:39.810208 8 log.go:172] (0xc000d9c9a0) (0xc001c67540) Stream added, broadcasting: 5 I0212 11:38:39.811186 8 log.go:172] (0xc000d9c9a0) Reply frame received for 5 I0212 11:38:39.912746 8 log.go:172] (0xc000d9c9a0) Data frame received for 3 I0212 11:38:39.912828 8 log.go:172] (0xc0009ae500) (3) Data frame handling I0212 11:38:39.912855 8 log.go:172] (0xc0009ae500) (3) Data frame sent I0212 11:38:40.056457 8 log.go:172] (0xc000d9c9a0) Data frame received for 1 I0212 11:38:40.056590 8 log.go:172] (0xc000d9c9a0) (0xc0009ae500) Stream removed, broadcasting: 3 I0212 11:38:40.056652 8 log.go:172] (0xc001c674a0) (1) Data frame handling I0212 11:38:40.056686 8 log.go:172] (0xc001c674a0) (1) Data frame sent I0212 11:38:40.056756 8 log.go:172] (0xc000d9c9a0) (0xc001c67540) Stream removed, 
broadcasting: 5 I0212 11:38:40.056818 8 log.go:172] (0xc000d9c9a0) (0xc001c674a0) Stream removed, broadcasting: 1 I0212 11:38:40.056848 8 log.go:172] (0xc000d9c9a0) Go away received I0212 11:38:40.057444 8 log.go:172] (0xc000d9c9a0) (0xc001c674a0) Stream removed, broadcasting: 1 I0212 11:38:40.057477 8 log.go:172] (0xc000d9c9a0) (0xc0009ae500) Stream removed, broadcasting: 3 I0212 11:38:40.057491 8 log.go:172] (0xc000d9c9a0) (0xc001c67540) Stream removed, broadcasting: 5 Feb 12 11:38:40.057: INFO: Exec stderr: "" Feb 12 11:38:40.057: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-x5hpx PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 11:38:40.057: INFO: >>> kubeConfig: /root/.kube/config I0212 11:38:40.113997 8 log.go:172] (0xc0000ff600) (0xc0026270e0) Create stream I0212 11:38:40.114072 8 log.go:172] (0xc0000ff600) (0xc0026270e0) Stream added, broadcasting: 1 I0212 11:38:40.119538 8 log.go:172] (0xc0000ff600) Reply frame received for 1 I0212 11:38:40.119582 8 log.go:172] (0xc0000ff600) (0xc0011370e0) Create stream I0212 11:38:40.119593 8 log.go:172] (0xc0000ff600) (0xc0011370e0) Stream added, broadcasting: 3 I0212 11:38:40.120624 8 log.go:172] (0xc0000ff600) Reply frame received for 3 I0212 11:38:40.120663 8 log.go:172] (0xc0000ff600) (0xc0009ae640) Create stream I0212 11:38:40.120676 8 log.go:172] (0xc0000ff600) (0xc0009ae640) Stream added, broadcasting: 5 I0212 11:38:40.121717 8 log.go:172] (0xc0000ff600) Reply frame received for 5 I0212 11:38:40.260440 8 log.go:172] (0xc0000ff600) Data frame received for 3 I0212 11:38:40.260515 8 log.go:172] (0xc0011370e0) (3) Data frame handling I0212 11:38:40.260557 8 log.go:172] (0xc0011370e0) (3) Data frame sent I0212 11:38:40.385812 8 log.go:172] (0xc0000ff600) Data frame received for 1 I0212 11:38:40.385944 8 log.go:172] (0xc0026270e0) (1) Data frame handling I0212 11:38:40.386001 8 log.go:172] (0xc0026270e0) (1) Data frame sent I0212 11:38:40.386214 8 log.go:172] (0xc0000ff600) (0xc0009ae640) Stream removed, broadcasting: 5 I0212 11:38:40.386282 8 log.go:172] (0xc0000ff600) (0xc0026270e0) Stream removed, broadcasting: 1 I0212 11:38:40.386376 8 log.go:172] (0xc0000ff600) (0xc0011370e0) Stream removed, broadcasting: 3 I0212 11:38:40.386535 8 log.go:172] (0xc0000ff600) Go away received I0212 11:38:40.386798 8 log.go:172] (0xc0000ff600) (0xc0026270e0) Stream removed, broadcasting: 1 I0212 11:38:40.386812 8 log.go:172] (0xc0000ff600) (0xc0011370e0) Stream removed, broadcasting: 3 I0212 11:38:40.386828 8 log.go:172] (0xc0000ff600) (0xc0009ae640) Stream removed, broadcasting: 5 Feb 12 11:38:40.386: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:38:40.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-x5hpx" for this suite. 
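
Each ExecWithOptions block above is the framework opening an exec stream through the apiserver to the kubelet and running cat in the target container; the interleaved I0212 lines are the SPDY streams being created and torn down (stream 1 carries errors, 3 stdout, 5 stderr). A condensed client-go sketch of the same call is below; the kubeconfig path, namespace, pod and container names are placeholders, and StreamWithContext assumes a fairly recent client-go (older releases expose Stream instead).

    package main

    import (
    	"bytes"
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/kubernetes/scheme"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/remotecommand"
    )

    func main() {
    	// Placeholders: adjust kubeconfig, namespace, pod and container names.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Build the pods/<name>/exec subresource request, mirroring what the
    	// e2e framework's ExecWithOptions does internally.
    	req := cs.CoreV1().RESTClient().Post().
    		Resource("pods").
    		Namespace("default").
    		Name("test-pod").
    		SubResource("exec").
    		VersionedParams(&corev1.PodExecOptions{
    			Container: "busybox-1",
    			Command:   []string{"cat", "/etc/hosts"},
    			Stdout:    true,
    			Stderr:    true,
    		}, scheme.ParameterCodec)

    	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
    	if err != nil {
    		panic(err)
    	}
    	var stdout, stderr bytes.Buffer
    	// Older client-go: exec.Stream(remotecommand.StreamOptions{...})
    	if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
    		Stdout: &stdout, Stderr: &stderr,
    	}); err != nil {
    		panic(err)
    	}
    	fmt.Println(stdout.String())
    }

For a pod running with hostNetwork=true, or a container that mounts its own volume over /etc/hosts, the file read back this way is not the kubelet-managed one, which is exactly the distinction the STEP lines above assert.
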
Feb 12 11:39:36.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:39:36.658: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-x5hpx, resource: bindings, ignored listing per whitelist Feb 12 11:39:36.729: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-x5hpx deletion completed in 56.327544048s • [SLOW TEST:89.585 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:39:36.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 12 11:39:37.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-fnctd" to be "success or failure" Feb 12 11:39:37.141: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.961248ms Feb 12 11:39:39.178: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056011019s Feb 12 11:39:41.194: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071858169s Feb 12 11:39:43.232: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110264155s Feb 12 11:39:45.342: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219414679s Feb 12 11:39:47.355: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.232872744s Feb 12 11:39:49.398: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.275524591s STEP: Saw pod success Feb 12 11:39:49.398: INFO: Pod "downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:39:49.406: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005 container client-container: STEP: delete the pod Feb 12 11:39:49.493: INFO: Waiting for pod downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005 to disappear Feb 12 11:39:49.616: INFO: Pod downwardapi-volume-58ead30e-4d8c-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:39:49.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fnctd" for this suite. Feb 12 11:39:55.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:39:55.772: INFO: namespace: e2e-tests-projected-fnctd, resource: bindings, ignored listing per whitelist Feb 12 11:39:55.919: INFO: namespace e2e-tests-projected-fnctd deletion completed in 6.284945287s • [SLOW TEST:19.189 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:39:55.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 12 11:39:56.273: INFO: Waiting up to 5m0s for pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005" in namespace "e2e-tests-containers-9jnns" to be "success or failure" Feb 12 11:39:56.315: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.425937ms Feb 12 11:39:58.359: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086543315s Feb 12 11:40:00.381: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108419078s Feb 12 11:40:02.904: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.631520498s Feb 12 11:40:04.923: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.649778549s Feb 12 11:40:06.946: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.673167519s Feb 12 11:40:08.961: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.688629607s STEP: Saw pod success Feb 12 11:40:08.962: INFO: Pod "client-containers-644df033-4d8c-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:40:08.967: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-644df033-4d8c-11ea-b4b9-0242ac110005 container test-container: STEP: delete the pod Feb 12 11:40:09.032: INFO: Waiting for pod client-containers-644df033-4d8c-11ea-b4b9-0242ac110005 to disappear Feb 12 11:40:09.040: INFO: Pod client-containers-644df033-4d8c-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:40:09.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9jnns" for this suite. Feb 12 11:40:15.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:40:15.250: INFO: namespace: e2e-tests-containers-9jnns, resource: bindings, ignored listing per whitelist Feb 12 11:40:15.423: INFO: namespace e2e-tests-containers-9jnns deletion completed in 6.376802113s • [SLOW TEST:19.504 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:40:15.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 12 11:40:15.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fnsqq' Feb 12 11:40:19.418: INFO: stderr: "" Feb 12 11:40:19.419: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 12 11:40:34.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod 
e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fnsqq -o json' Feb 12 11:40:34.651: INFO: stderr: "" Feb 12 11:40:34.651: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-12T11:40:19Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-fnsqq\",\n \"resourceVersion\": \"21414961\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-fnsqq/pods/e2e-test-nginx-pod\",\n \"uid\": \"7218e5bd-4d8c-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9ptk7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9ptk7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9ptk7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-12T11:40:19Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-12T11:40:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-12T11:40:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-12T11:40:19Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://6e1c9cbad3bbe7a367d2eebe8b01e06f71da69bcdee0c9d0280136afc5149cef\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-12T11:40:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-12T11:40:19Z\"\n }\n}\n" STEP: replace the image in the pod Feb 12 11:40:34.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-fnsqq' Feb 12 11:40:35.205: INFO: stderr: "" Feb 12 11:40:35.205: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Feb 12 11:40:35.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fnsqq' Feb 12 11:40:44.032: INFO: stderr: "" Feb 12 11:40:44.032: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:40:44.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fnsqq" for this suite. Feb 12 11:40:50.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:40:50.204: INFO: namespace: e2e-tests-kubectl-fnsqq, resource: bindings, ignored listing per whitelist Feb 12 11:40:50.315: INFO: namespace e2e-tests-kubectl-fnsqq deletion completed in 6.259264257s • [SLOW TEST:34.891 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:40:50.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-84ba866d-4d8c-11ea-b4b9-0242ac110005 STEP: Creating a pod to test consume configMaps Feb 12 11:40:50.688: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-4zb5j" to be "success or failure" Feb 12 11:40:50.728: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.230635ms Feb 12 11:40:52.741: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052858752s Feb 12 11:40:54.753: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064137957s Feb 12 11:40:56.786: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097698585s Feb 12 11:40:58.841: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.152331158s Feb 12 11:41:00.883: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.194721694s STEP: Saw pod success Feb 12 11:41:00.884: INFO: Pod "pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005" satisfied condition "success or failure" Feb 12 11:41:00.894: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Feb 12 11:41:01.010: INFO: Waiting for pod pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005 to disappear Feb 12 11:41:01.018: INFO: Pod pod-projected-configmaps-84c5f057-4d8c-11ea-b4b9-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 12 11:41:01.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4zb5j" for this suite. Feb 12 11:41:07.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 11:41:07.530: INFO: namespace: e2e-tests-projected-4zb5j, resource: bindings, ignored listing per whitelist Feb 12 11:41:07.545: INFO: namespace e2e-tests-projected-4zb5j deletion completed in 6.515488588s • [SLOW TEST:17.229 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 12 11:41:07.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 12 11:41:07.859: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.968712ms)
Feb 12 11:41:07.865: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.342667ms)
Feb 12 11:41:07.869: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.472086ms)
Feb 12 11:41:07.908: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 38.672386ms)
Feb 12 11:41:07.913: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.820453ms)
Feb 12 11:41:07.918: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.650864ms)
Feb 12 11:41:07.923: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.374044ms)
Feb 12 11:41:07.928: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.692916ms)
Feb 12 11:41:07.932: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.042979ms)
Feb 12 11:41:07.936: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.62069ms)
Feb 12 11:41:07.941: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.638874ms)
Feb 12 11:41:07.946: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.090889ms)
Feb 12 11:41:07.951: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.695241ms)
Feb 12 11:41:07.956: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.742144ms)
Feb 12 11:41:07.960: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.35737ms)
Feb 12 11:41:07.964: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.023022ms)
Feb 12 11:41:07.969: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.273697ms)
Feb 12 11:41:07.974: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.800565ms)
Feb 12 11:41:07.980: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.321481ms)
Feb 12 11:41:07.985: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.156858ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:41:07.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-59kr9" for this suite.
Feb 12 11:41:14.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:41:14.125: INFO: namespace: e2e-tests-proxy-59kr9, resource: bindings, ignored listing per whitelist
Feb 12 11:41:14.224: INFO: namespace e2e-tests-proxy-59kr9 deletion completed in 6.233591118s

• [SLOW TEST:6.678 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
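Editor's note: the twenty requests logged above all go through the API server's node proxy subresource, which can be queried by hand with kubectl. A minimal sketch, reusing the kubeconfig and node name from this run; only standard kubectl behaviour is assumed, nothing specific to the e2e framework:

# List the kubelet's log directory via the node proxy subresource,
# the same /api/v1/nodes/<node>:10250/proxy/logs/ path the test hit above.
NODE=hunter-server-hu5at5svl7ps          # node name as it appears in this log
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/${NODE}:10250/proxy/logs/"
# A 200 response returns the directory listing (alternatives.log, ...), matching
# the truncated bodies and per-request latencies recorded above.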
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:41:14.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 11:41:14.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-wvqqr" to be "success or failure"
Feb 12 11:41:14.499: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.391118ms
Feb 12 11:41:16.572: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101334863s
Feb 12 11:41:18.586: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114753909s
Feb 12 11:41:20.623: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151601725s
Feb 12 11:41:22.639: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1683022s
Feb 12 11:41:24.650: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178453877s
STEP: Saw pod success
Feb 12 11:41:24.650: INFO: Pod "downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 11:41:24.652: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 11:41:26.993: INFO: Waiting for pod downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005 to disappear
Feb 12 11:41:27.016: INFO: Pod downwardapi-volume-92e93cdc-4d8c-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:41:27.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wvqqr" for this suite.
Feb 12 11:41:33.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:41:33.244: INFO: namespace: e2e-tests-downward-api-wvqqr, resource: bindings, ignored listing per whitelist
Feb 12 11:41:33.266: INFO: namespace e2e-tests-downward-api-wvqqr deletion completed in 6.241467386s

• [SLOW TEST:19.042 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
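Editor's note: the test above comes down to mounting a downward API volume whose item carries an explicit mode and checking the resulting file permissions. A hand-written sketch of such a pod follows; the names, image, command, and the 0400 mode are illustrative, not the test's generated fixture:

# Pod with a downward API volume item that sets a per-item file mode.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                        # the per-item mode being exercised
EOF
# `kubectl logs downwardapi-mode-demo` should list podname with mode -r--------
# and print the pod's own name.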
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:41:33.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 11:41:33.490: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 12 11:41:38.684: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 11:41:46.715: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 12 11:41:46.916: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-rzfxs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rzfxs/deployments/test-cleanup-deployment,UID:a631e598-4d8c-11ea-a994-fa163e34d433,ResourceVersion:21415148,Generation:1,CreationTimestamp:2020-02-12 11:41:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 12 11:41:46.947: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb 12 11:41:46.947: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 12 11:41:46.948: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-rzfxs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-rzfxs/replicasets/test-cleanup-controller,UID:9e4ad0c2-4d8c-11ea-a994-fa163e34d433,ResourceVersion:21415149,Generation:1,CreationTimestamp:2020-02-12 11:41:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a631e598-4d8c-11ea-a994-fa163e34d433 0xc0024cadf7 0xc0024cadf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 12 11:41:47.015: INFO: Pod "test-cleanup-controller-wwtd9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-wwtd9,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-rzfxs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-rzfxs/pods/test-cleanup-controller-wwtd9,UID:9e5896bb-4d8c-11ea-a994-fa163e34d433,ResourceVersion:21415145,Generation:0,CreationTimestamp:2020-02-12 11:41:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 9e4ad0c2-4d8c-11ea-a994-fa163e34d433 0xc002397237 0xc002397238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-n8dmx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n8dmx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-n8dmx true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023972a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023972c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:41:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:41:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:41:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-12 11:41:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 11:41:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9391e52f93c9bd68882330a016ff86ff62d662f52ece99972c0ec698fe7efcd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:41:47.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-rzfxs" for this suite.
Feb 12 11:42:01.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:42:01.408: INFO: namespace: e2e-tests-deployment-rzfxs, resource: bindings, ignored listing per whitelist
Feb 12 11:42:01.419: INFO: namespace e2e-tests-deployment-rzfxs deletion completed in 14.327714418s

• [SLOW TEST:28.152 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
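Editor's note: the deployment spec dumped above shows RevisionHistoryLimit:*0, which is the knob this test leans on: with a zero history limit the Deployment controller deletes superseded ReplicaSets instead of keeping them for rollback. A minimal hand-written sketch (names are illustrative; the redis image is the one that appears in the dumped template):

# Deployment that keeps no old ReplicaSets after a rollout.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo                      # illustrative name
spec:
  replicas: 1
  revisionHistoryLimit: 0                 # old ReplicaSets are garbage-collected
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# After a template change (e.g. kubectl set image deployment/cleanup-demo redis=<new image>),
# `kubectl get rs -l name=cleanup-pod` should list only the current ReplicaSet.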
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:42:01.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pn7sb
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-pn7sb
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pn7sb
Feb 12 11:42:01.701: INFO: Found 0 stateful pods, waiting for 1
Feb 12 11:42:11.719: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 11:42:21.718: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 12 11:42:21.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 11:42:22.447: INFO: stderr: "I0212 11:42:21.983886    1683 log.go:172] (0xc000138580) (0xc0002a3360) Create stream\nI0212 11:42:21.984283    1683 log.go:172] (0xc000138580) (0xc0002a3360) Stream added, broadcasting: 1\nI0212 11:42:21.994102    1683 log.go:172] (0xc000138580) Reply frame received for 1\nI0212 11:42:21.994203    1683 log.go:172] (0xc000138580) (0xc0002a3400) Create stream\nI0212 11:42:21.994220    1683 log.go:172] (0xc000138580) (0xc0002a3400) Stream added, broadcasting: 3\nI0212 11:42:21.995598    1683 log.go:172] (0xc000138580) Reply frame received for 3\nI0212 11:42:21.995626    1683 log.go:172] (0xc000138580) (0xc0007b8000) Create stream\nI0212 11:42:21.995635    1683 log.go:172] (0xc000138580) (0xc0007b8000) Stream added, broadcasting: 5\nI0212 11:42:21.996813    1683 log.go:172] (0xc000138580) Reply frame received for 5\nI0212 11:42:22.202734    1683 log.go:172] (0xc000138580) Data frame received for 3\nI0212 11:42:22.202989    1683 log.go:172] (0xc0002a3400) (3) Data frame handling\nI0212 11:42:22.203051    1683 log.go:172] (0xc0002a3400) (3) Data frame sent\nI0212 11:42:22.429179    1683 log.go:172] (0xc000138580) (0xc0002a3400) Stream removed, broadcasting: 3\nI0212 11:42:22.429613    1683 log.go:172] (0xc000138580) Data frame received for 1\nI0212 11:42:22.429636    1683 log.go:172] (0xc0002a3360) (1) Data frame handling\nI0212 11:42:22.429662    1683 log.go:172] (0xc0002a3360) (1) Data frame sent\nI0212 11:42:22.429681    1683 log.go:172] (0xc000138580) (0xc0002a3360) Stream removed, broadcasting: 1\nI0212 11:42:22.430392    1683 log.go:172] (0xc000138580) (0xc0007b8000) Stream removed, broadcasting: 5\nI0212 11:42:22.430536    1683 log.go:172] (0xc000138580) (0xc0002a3360) Stream removed, broadcasting: 1\nI0212 11:42:22.430584    1683 log.go:172] (0xc000138580) (0xc0002a3400) Stream removed, broadcasting: 3\nI0212 11:42:22.430600    1683 log.go:172] (0xc000138580) (0xc0007b8000) Stream removed, broadcasting: 5\nI0212 11:42:22.430775    1683 log.go:172] (0xc000138580) Go away received\n"
Feb 12 11:42:22.447: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 11:42:22.447: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 11:42:22.471: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 11:42:22.471: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 11:42:22.581: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997873s
Feb 12 11:42:23.609: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.928326138s
Feb 12 11:42:24.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.900148243s
Feb 12 11:42:25.657: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.881158973s
Feb 12 11:42:26.672: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.852373214s
Feb 12 11:42:27.749: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.836723882s
Feb 12 11:42:28.790: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.759956345s
Feb 12 11:42:29.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.71900435s
Feb 12 11:42:30.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.702406668s
Feb 12 11:42:31.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 678.629389ms
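Editor's note on the ten-second window just logged: after the mv moved index.html out of nginx's web root, ss-0 stops reporting Ready (presumably because the fixture's readiness probe requests the served file; that detail is an assumption, not shown in this log), and with ordered pod management the controller will not create ss-1 while ss-0 is unready. A rough sketch of observing that state by hand, reusing the namespace and commands from this run:

NS=e2e-tests-statefulset-pn7sb            # namespace from this run
# Break readiness the same way the test does, then check the Ready condition.
kubectl --kubeconfig=/root/.kube/config -n "$NS" exec ss-0 -- mv -v /usr/share/nginx/html/index.html /tmp/
kubectl --kubeconfig=/root/.kube/config -n "$NS" get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
# Expect "False"; `kubectl -n "$NS" get statefulset ss` should keep reporting one
# replica until the file is moved back and ss-0 becomes Ready again.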
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-pn7sb
Feb 12 11:42:32.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:42:33.541: INFO: stderr: "I0212 11:42:33.123180    1705 log.go:172] (0xc00015c6e0) (0xc000704780) Create stream\nI0212 11:42:33.123458    1705 log.go:172] (0xc00015c6e0) (0xc000704780) Stream added, broadcasting: 1\nI0212 11:42:33.131187    1705 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0212 11:42:33.131263    1705 log.go:172] (0xc00015c6e0) (0xc0006b45a0) Create stream\nI0212 11:42:33.131276    1705 log.go:172] (0xc00015c6e0) (0xc0006b45a0) Stream added, broadcasting: 3\nI0212 11:42:33.132527    1705 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0212 11:42:33.132554    1705 log.go:172] (0xc00015c6e0) (0xc000704820) Create stream\nI0212 11:42:33.132565    1705 log.go:172] (0xc00015c6e0) (0xc000704820) Stream added, broadcasting: 5\nI0212 11:42:33.133628    1705 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0212 11:42:33.273406    1705 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0212 11:42:33.273560    1705 log.go:172] (0xc0006b45a0) (3) Data frame handling\nI0212 11:42:33.273587    1705 log.go:172] (0xc0006b45a0) (3) Data frame sent\nI0212 11:42:33.524654    1705 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0212 11:42:33.524878    1705 log.go:172] (0xc00015c6e0) (0xc0006b45a0) Stream removed, broadcasting: 3\nI0212 11:42:33.525001    1705 log.go:172] (0xc000704780) (1) Data frame handling\nI0212 11:42:33.525048    1705 log.go:172] (0xc00015c6e0) (0xc000704820) Stream removed, broadcasting: 5\nI0212 11:42:33.525101    1705 log.go:172] (0xc000704780) (1) Data frame sent\nI0212 11:42:33.525124    1705 log.go:172] (0xc00015c6e0) (0xc000704780) Stream removed, broadcasting: 1\nI0212 11:42:33.525163    1705 log.go:172] (0xc00015c6e0) Go away received\nI0212 11:42:33.525968    1705 log.go:172] (0xc00015c6e0) (0xc000704780) Stream removed, broadcasting: 1\nI0212 11:42:33.525986    1705 log.go:172] (0xc00015c6e0) (0xc0006b45a0) Stream removed, broadcasting: 3\nI0212 11:42:33.526003    1705 log.go:172] (0xc00015c6e0) (0xc000704820) Stream removed, broadcasting: 5\n"
Feb 12 11:42:33.541: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 11:42:33.541: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 11:42:33.614: INFO: Found 2 stateful pods, waiting for 3
Feb 12 11:42:43.692: INFO: Found 2 stateful pods, waiting for 3
Feb 12 11:42:53.641: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 11:42:53.642: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 11:42:53.642: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 11:43:03.651: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 11:43:03.651: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 11:43:03.651: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
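Editor's note: the strict ss-0 → ss-1 → ss-2 progression verified above is the default StatefulSet behaviour (podManagementPolicy: OrderedReady): each ordinal is created only after the previous one is Running and Ready. A minimal sketch of a set shaped like the one in this run; the readiness probe and service wiring are assumptions about the fixture, not copied from it:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo                           # illustrative; the test's set is named "ss"
spec:
  serviceName: test                       # headless service; this run created one named "test"
  replicas: 3
  podManagementPolicy: OrderedReady       # default, spelled out for clarity
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        readinessProbe:                   # assumed probe; failing it halts further scaling
          httpGet:
            path: /index.html
            port: 80
EOF
# `kubectl get pods -l app=ss-demo -w` shows ss-demo-0 becoming Ready before ss-demo-1 is created.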
STEP: Scale down will halt with unhealthy stateful pod
Feb 12 11:43:03.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 11:43:04.627: INFO: stderr: "I0212 11:43:04.005699    1727 log.go:172] (0xc000742370) (0xc00076c640) Create stream\nI0212 11:43:04.006286    1727 log.go:172] (0xc000742370) (0xc00076c640) Stream added, broadcasting: 1\nI0212 11:43:04.021528    1727 log.go:172] (0xc000742370) Reply frame received for 1\nI0212 11:43:04.021622    1727 log.go:172] (0xc000742370) (0xc00076c6e0) Create stream\nI0212 11:43:04.021647    1727 log.go:172] (0xc000742370) (0xc00076c6e0) Stream added, broadcasting: 3\nI0212 11:43:04.023327    1727 log.go:172] (0xc000742370) Reply frame received for 3\nI0212 11:43:04.023363    1727 log.go:172] (0xc000742370) (0xc0001a4d20) Create stream\nI0212 11:43:04.023409    1727 log.go:172] (0xc000742370) (0xc0001a4d20) Stream added, broadcasting: 5\nI0212 11:43:04.026683    1727 log.go:172] (0xc000742370) Reply frame received for 5\nI0212 11:43:04.273746    1727 log.go:172] (0xc000742370) Data frame received for 3\nI0212 11:43:04.273831    1727 log.go:172] (0xc00076c6e0) (3) Data frame handling\nI0212 11:43:04.273851    1727 log.go:172] (0xc00076c6e0) (3) Data frame sent\nI0212 11:43:04.609760    1727 log.go:172] (0xc000742370) Data frame received for 1\nI0212 11:43:04.609937    1727 log.go:172] (0xc000742370) (0xc00076c6e0) Stream removed, broadcasting: 3\nI0212 11:43:04.610117    1727 log.go:172] (0xc00076c640) (1) Data frame handling\nI0212 11:43:04.610226    1727 log.go:172] (0xc000742370) (0xc0001a4d20) Stream removed, broadcasting: 5\nI0212 11:43:04.610315    1727 log.go:172] (0xc00076c640) (1) Data frame sent\nI0212 11:43:04.610333    1727 log.go:172] (0xc000742370) (0xc00076c640) Stream removed, broadcasting: 1\nI0212 11:43:04.610374    1727 log.go:172] (0xc000742370) Go away received\nI0212 11:43:04.611214    1727 log.go:172] (0xc000742370) (0xc00076c640) Stream removed, broadcasting: 1\nI0212 11:43:04.611242    1727 log.go:172] (0xc000742370) (0xc00076c6e0) Stream removed, broadcasting: 3\nI0212 11:43:04.611262    1727 log.go:172] (0xc000742370) (0xc0001a4d20) Stream removed, broadcasting: 5\n"
Feb 12 11:43:04.628: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 11:43:04.628: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 11:43:04.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 11:43:05.140: INFO: stderr: "I0212 11:43:04.886510    1749 log.go:172] (0xc0001386e0) (0xc000722640) Create stream\nI0212 11:43:04.886692    1749 log.go:172] (0xc0001386e0) (0xc000722640) Stream added, broadcasting: 1\nI0212 11:43:04.891551    1749 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0212 11:43:04.891576    1749 log.go:172] (0xc0001386e0) (0xc0007226e0) Create stream\nI0212 11:43:04.891581    1749 log.go:172] (0xc0001386e0) (0xc0007226e0) Stream added, broadcasting: 3\nI0212 11:43:04.892439    1749 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0212 11:43:04.892486    1749 log.go:172] (0xc0001386e0) (0xc000630e60) Create stream\nI0212 11:43:04.892515    1749 log.go:172] (0xc0001386e0) (0xc000630e60) Stream added, broadcasting: 5\nI0212 11:43:04.896326    1749 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0212 11:43:05.028542    1749 log.go:172] (0xc0001386e0) Data frame received for 3\nI0212 11:43:05.028660    1749 log.go:172] (0xc0007226e0) (3) Data frame handling\nI0212 11:43:05.028684    1749 log.go:172] (0xc0007226e0) (3) Data frame sent\nI0212 11:43:05.132637    1749 log.go:172] (0xc0001386e0) (0xc000630e60) Stream removed, broadcasting: 5\nI0212 11:43:05.132796    1749 log.go:172] (0xc0001386e0) Data frame received for 1\nI0212 11:43:05.132814    1749 log.go:172] (0xc000722640) (1) Data frame handling\nI0212 11:43:05.132833    1749 log.go:172] (0xc0001386e0) (0xc0007226e0) Stream removed, broadcasting: 3\nI0212 11:43:05.132867    1749 log.go:172] (0xc000722640) (1) Data frame sent\nI0212 11:43:05.132880    1749 log.go:172] (0xc0001386e0) (0xc000722640) Stream removed, broadcasting: 1\nI0212 11:43:05.132902    1749 log.go:172] (0xc0001386e0) Go away received\nI0212 11:43:05.133284    1749 log.go:172] (0xc0001386e0) (0xc000722640) Stream removed, broadcasting: 1\nI0212 11:43:05.133301    1749 log.go:172] (0xc0001386e0) (0xc0007226e0) Stream removed, broadcasting: 3\nI0212 11:43:05.133310    1749 log.go:172] (0xc0001386e0) (0xc000630e60) Stream removed, broadcasting: 5\n"
Feb 12 11:43:05.141: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 11:43:05.141: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 11:43:05.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 11:43:05.589: INFO: stderr: "I0212 11:43:05.324270    1770 log.go:172] (0xc00014c0b0) (0xc00069a000) Create stream\nI0212 11:43:05.324508    1770 log.go:172] (0xc00014c0b0) (0xc00069a000) Stream added, broadcasting: 1\nI0212 11:43:05.331710    1770 log.go:172] (0xc00014c0b0) Reply frame received for 1\nI0212 11:43:05.331805    1770 log.go:172] (0xc00014c0b0) (0xc000526be0) Create stream\nI0212 11:43:05.331827    1770 log.go:172] (0xc00014c0b0) (0xc000526be0) Stream added, broadcasting: 3\nI0212 11:43:05.333554    1770 log.go:172] (0xc00014c0b0) Reply frame received for 3\nI0212 11:43:05.333591    1770 log.go:172] (0xc00014c0b0) (0xc0007da000) Create stream\nI0212 11:43:05.333604    1770 log.go:172] (0xc00014c0b0) (0xc0007da000) Stream added, broadcasting: 5\nI0212 11:43:05.334964    1770 log.go:172] (0xc00014c0b0) Reply frame received for 5\nI0212 11:43:05.458340    1770 log.go:172] (0xc00014c0b0) Data frame received for 3\nI0212 11:43:05.458456    1770 log.go:172] (0xc000526be0) (3) Data frame handling\nI0212 11:43:05.458479    1770 log.go:172] (0xc000526be0) (3) Data frame sent\nI0212 11:43:05.578540    1770 log.go:172] (0xc00014c0b0) Data frame received for 1\nI0212 11:43:05.578724    1770 log.go:172] (0xc00014c0b0) (0xc000526be0) Stream removed, broadcasting: 3\nI0212 11:43:05.578769    1770 log.go:172] (0xc00069a000) (1) Data frame handling\nI0212 11:43:05.578806    1770 log.go:172] (0xc00069a000) (1) Data frame sent\nI0212 11:43:05.578875    1770 log.go:172] (0xc00014c0b0) (0xc00069a000) Stream removed, broadcasting: 1\nI0212 11:43:05.579277    1770 log.go:172] (0xc00014c0b0) (0xc0007da000) Stream removed, broadcasting: 5\nI0212 11:43:05.579331    1770 log.go:172] (0xc00014c0b0) Go away received\nI0212 11:43:05.579627    1770 log.go:172] (0xc00014c0b0) (0xc00069a000) Stream removed, broadcasting: 1\nI0212 11:43:05.579649    1770 log.go:172] (0xc00014c0b0) (0xc000526be0) Stream removed, broadcasting: 3\nI0212 11:43:05.579664    1770 log.go:172] (0xc00014c0b0) (0xc0007da000) Stream removed, broadcasting: 5\n"
Feb 12 11:43:05.589: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 11:43:05.589: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 11:43:05.589: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 11:43:05.601: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 12 11:43:15.625: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 11:43:15.625: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 11:43:15.625: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 11:43:15.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999249s
Feb 12 11:43:16.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976591499s
Feb 12 11:43:17.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.95463352s
Feb 12 11:43:18.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.93561393s
Feb 12 11:43:19.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.910841313s
Feb 12 11:43:20.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.894088591s
Feb 12 11:43:22.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.882437386s
Feb 12 11:43:23.459: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.232353459s
Feb 12 11:43:24.482: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.198924723s
Feb 12 11:43:25.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 176.551877ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-pn7sb
Feb 12 11:43:26.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:43:27.140: INFO: stderr: "I0212 11:43:26.834936    1791 log.go:172] (0xc000708370) (0xc00072e640) Create stream\nI0212 11:43:26.835267    1791 log.go:172] (0xc000708370) (0xc00072e640) Stream added, broadcasting: 1\nI0212 11:43:26.842853    1791 log.go:172] (0xc000708370) Reply frame received for 1\nI0212 11:43:26.842940    1791 log.go:172] (0xc000708370) (0xc00072e6e0) Create stream\nI0212 11:43:26.842960    1791 log.go:172] (0xc000708370) (0xc00072e6e0) Stream added, broadcasting: 3\nI0212 11:43:26.845006    1791 log.go:172] (0xc000708370) Reply frame received for 3\nI0212 11:43:26.845045    1791 log.go:172] (0xc000708370) (0xc0005a4c80) Create stream\nI0212 11:43:26.845054    1791 log.go:172] (0xc000708370) (0xc0005a4c80) Stream added, broadcasting: 5\nI0212 11:43:26.846530    1791 log.go:172] (0xc000708370) Reply frame received for 5\nI0212 11:43:26.988748    1791 log.go:172] (0xc000708370) Data frame received for 3\nI0212 11:43:26.988925    1791 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0212 11:43:26.988959    1791 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0212 11:43:27.127179    1791 log.go:172] (0xc000708370) (0xc00072e6e0) Stream removed, broadcasting: 3\nI0212 11:43:27.127317    1791 log.go:172] (0xc000708370) Data frame received for 1\nI0212 11:43:27.127370    1791 log.go:172] (0xc00072e640) (1) Data frame handling\nI0212 11:43:27.127417    1791 log.go:172] (0xc00072e640) (1) Data frame sent\nI0212 11:43:27.127429    1791 log.go:172] (0xc000708370) (0xc0005a4c80) Stream removed, broadcasting: 5\nI0212 11:43:27.127492    1791 log.go:172] (0xc000708370) (0xc00072e640) Stream removed, broadcasting: 1\nI0212 11:43:27.127517    1791 log.go:172] (0xc000708370) Go away received\nI0212 11:43:27.128151    1791 log.go:172] (0xc000708370) (0xc00072e640) Stream removed, broadcasting: 1\nI0212 11:43:27.128173    1791 log.go:172] (0xc000708370) (0xc00072e6e0) Stream removed, broadcasting: 3\nI0212 11:43:27.128181    1791 log.go:172] (0xc000708370) (0xc0005a4c80) Stream removed, broadcasting: 5\n"
Feb 12 11:43:27.140: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 11:43:27.140: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 11:43:27.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:43:27.924: INFO: stderr: "I0212 11:43:27.365526    1813 log.go:172] (0xc00071c370) (0xc00079c640) Create stream\nI0212 11:43:27.365779    1813 log.go:172] (0xc00071c370) (0xc00079c640) Stream added, broadcasting: 1\nI0212 11:43:27.373079    1813 log.go:172] (0xc00071c370) Reply frame received for 1\nI0212 11:43:27.373151    1813 log.go:172] (0xc00071c370) (0xc00079c6e0) Create stream\nI0212 11:43:27.373165    1813 log.go:172] (0xc00071c370) (0xc00079c6e0) Stream added, broadcasting: 3\nI0212 11:43:27.378283    1813 log.go:172] (0xc00071c370) Reply frame received for 3\nI0212 11:43:27.378308    1813 log.go:172] (0xc00071c370) (0xc000650e60) Create stream\nI0212 11:43:27.378319    1813 log.go:172] (0xc00071c370) (0xc000650e60) Stream added, broadcasting: 5\nI0212 11:43:27.379877    1813 log.go:172] (0xc00071c370) Reply frame received for 5\nI0212 11:43:27.535115    1813 log.go:172] (0xc00071c370) Data frame received for 3\nI0212 11:43:27.535238    1813 log.go:172] (0xc00079c6e0) (3) Data frame handling\nI0212 11:43:27.535258    1813 log.go:172] (0xc00079c6e0) (3) Data frame sent\nI0212 11:43:27.915125    1813 log.go:172] (0xc00071c370) (0xc00079c6e0) Stream removed, broadcasting: 3\nI0212 11:43:27.915305    1813 log.go:172] (0xc00071c370) Data frame received for 1\nI0212 11:43:27.915319    1813 log.go:172] (0xc00079c640) (1) Data frame handling\nI0212 11:43:27.915338    1813 log.go:172] (0xc00071c370) (0xc000650e60) Stream removed, broadcasting: 5\nI0212 11:43:27.915365    1813 log.go:172] (0xc00079c640) (1) Data frame sent\nI0212 11:43:27.915380    1813 log.go:172] (0xc00071c370) (0xc00079c640) Stream removed, broadcasting: 1\nI0212 11:43:27.915392    1813 log.go:172] (0xc00071c370) Go away received\nI0212 11:43:27.916201    1813 log.go:172] (0xc00071c370) (0xc00079c640) Stream removed, broadcasting: 1\nI0212 11:43:27.916209    1813 log.go:172] (0xc00071c370) (0xc00079c6e0) Stream removed, broadcasting: 3\nI0212 11:43:27.916212    1813 log.go:172] (0xc00071c370) (0xc000650e60) Stream removed, broadcasting: 5\n"
Feb 12 11:43:27.925: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 11:43:27.925: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 11:43:27.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:43:28.234: INFO: rc: 126
Feb 12 11:43:28.234: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 I0212 11:43:28.181952    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Create stream
I0212 11:43:28.182303    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Stream added, broadcasting: 1
I0212 11:43:28.190054    1835 log.go:172] (0xc000730370) Reply frame received for 1
I0212 11:43:28.190104    1835 log.go:172] (0xc000730370) (0xc00064ec80) Create stream
I0212 11:43:28.190110    1835 log.go:172] (0xc000730370) (0xc00064ec80) Stream added, broadcasting: 3
I0212 11:43:28.191989    1835 log.go:172] (0xc000730370) Reply frame received for 3
I0212 11:43:28.192027    1835 log.go:172] (0xc000730370) (0xc0005de000) Create stream
I0212 11:43:28.192036    1835 log.go:172] (0xc000730370) (0xc0005de000) Stream added, broadcasting: 5
I0212 11:43:28.192871    1835 log.go:172] (0xc000730370) Reply frame received for 5
I0212 11:43:28.213995    1835 log.go:172] (0xc000730370) Data frame received for 3
I0212 11:43:28.214031    1835 log.go:172] (0xc00064ec80) (3) Data frame handling
I0212 11:43:28.214053    1835 log.go:172] (0xc00064ec80) (3) Data frame sent
I0212 11:43:28.216995    1835 log.go:172] (0xc000730370) Data frame received for 1
I0212 11:43:28.217013    1835 log.go:172] (0xc000730370) (0xc0005de000) Stream removed, broadcasting: 5
I0212 11:43:28.217050    1835 log.go:172] (0xc00075a6e0) (1) Data frame handling
I0212 11:43:28.217070    1835 log.go:172] (0xc00075a6e0) (1) Data frame sent
I0212 11:43:28.217119    1835 log.go:172] (0xc000730370) (0xc00064ec80) Stream removed, broadcasting: 3
I0212 11:43:28.217153    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Stream removed, broadcasting: 1
I0212 11:43:28.217196    1835 log.go:172] (0xc000730370) Go away received
I0212 11:43:28.217931    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Stream removed, broadcasting: 1
I0212 11:43:28.217945    1835 log.go:172] (0xc000730370) (0xc00064ec80) Stream removed, broadcasting: 3
I0212 11:43:28.217953    1835 log.go:172] (0xc000730370) (0xc0005de000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc000eeac60 exit status 126   true [0xc0017ca150 0xc0017ca168 0xc0017ca180] [0xc0017ca150 0xc0017ca168 0xc0017ca180] [0xc0017ca160 0xc0017ca178] [0x935700 0x935700] 0xc001183620 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0212 11:43:28.181952    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Create stream
I0212 11:43:28.182303    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Stream added, broadcasting: 1
I0212 11:43:28.190054    1835 log.go:172] (0xc000730370) Reply frame received for 1
I0212 11:43:28.190104    1835 log.go:172] (0xc000730370) (0xc00064ec80) Create stream
I0212 11:43:28.190110    1835 log.go:172] (0xc000730370) (0xc00064ec80) Stream added, broadcasting: 3
I0212 11:43:28.191989    1835 log.go:172] (0xc000730370) Reply frame received for 3
I0212 11:43:28.192027    1835 log.go:172] (0xc000730370) (0xc0005de000) Create stream
I0212 11:43:28.192036    1835 log.go:172] (0xc000730370) (0xc0005de000) Stream added, broadcasting: 5
I0212 11:43:28.192871    1835 log.go:172] (0xc000730370) Reply frame received for 5
I0212 11:43:28.213995    1835 log.go:172] (0xc000730370) Data frame received for 3
I0212 11:43:28.214031    1835 log.go:172] (0xc00064ec80) (3) Data frame handling
I0212 11:43:28.214053    1835 log.go:172] (0xc00064ec80) (3) Data frame sent
I0212 11:43:28.216995    1835 log.go:172] (0xc000730370) Data frame received for 1
I0212 11:43:28.217013    1835 log.go:172] (0xc000730370) (0xc0005de000) Stream removed, broadcasting: 5
I0212 11:43:28.217050    1835 log.go:172] (0xc00075a6e0) (1) Data frame handling
I0212 11:43:28.217070    1835 log.go:172] (0xc00075a6e0) (1) Data frame sent
I0212 11:43:28.217119    1835 log.go:172] (0xc000730370) (0xc00064ec80) Stream removed, broadcasting: 3
I0212 11:43:28.217153    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Stream removed, broadcasting: 1
I0212 11:43:28.217196    1835 log.go:172] (0xc000730370) Go away received
I0212 11:43:28.217931    1835 log.go:172] (0xc000730370) (0xc00075a6e0) Stream removed, broadcasting: 1
I0212 11:43:28.217945    1835 log.go:172] (0xc000730370) (0xc00064ec80) Stream removed, broadcasting: 3
I0212 11:43:28.217953    1835 log.go:172] (0xc000730370) (0xc0005de000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126

Feb 12 11:43:38.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:43:38.731: INFO: rc: 1
Feb 12 11:43:38.732: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0018b6630 exit status 1   true [0xc00259a1d0 0xc00259a1e8 0xc00259a200] [0xc00259a1d0 0xc00259a1e8 0xc00259a200] [0xc00259a1e0 0xc00259a1f8] [0x935700 0x935700] 0xc0022ff260 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 12 11:43:48.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:43:48.863: INFO: rc: 1
Feb 12 11:43:48.864: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000e62bd0 exit status 1   true [0xc0004b1370 0xc0004b1400 0xc0004b1458] [0xc0004b1370 0xc0004b1400 0xc0004b1458] [0xc0004b13d0 0xc0004b1420] [0x935700 0x935700] 0xc001f73a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:43:58.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:43:59.035: INFO: rc: 1
Feb 12 11:43:59.036: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000e62d50 exit status 1   true [0xc0004b1490 0xc0004b14f8 0xc0004b1568] [0xc0004b1490 0xc0004b14f8 0xc0004b1568] [0xc0004b14f0 0xc0004b1538] [0x935700 0x935700] 0xc001f73ce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:44:09.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:44:09.200: INFO: rc: 1
Feb 12 11:44:09.201: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0018b6780 exit status 1   true [0xc00259a208 0xc00259a220 0xc00259a238] [0xc00259a208 0xc00259a220 0xc00259a238] [0xc00259a218 0xc00259a230] [0x935700 0x935700] 0xc0022ff5c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:44:19.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:44:19.370: INFO: rc: 1
Feb 12 11:44:19.371: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000eeb080 exit status 1   true [0xc0017ca188 0xc0017ca1a0 0xc0017ca1b8] [0xc0017ca188 0xc0017ca1a0 0xc0017ca1b8] [0xc0017ca198 0xc0017ca1b0] [0x935700 0x935700] 0xc0011838c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:44:29.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:44:29.524: INFO: rc: 1
Feb 12 11:44:29.524: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000e62ea0 exit status 1   true [0xc0004b1580 0xc0004b1618 0xc0004b1660] [0xc0004b1580 0xc0004b1618 0xc0004b1660] [0xc0004b15e8 0xc0004b1658] [0x935700 0x935700] 0xc001f73f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:44:39.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:44:39.687: INFO: rc: 1
Feb 12 11:44:39.687: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc000e62fc0 exit status 1   true [0xc0004b1668 0xc0004b16f0 0xc0004b17d8] [0xc0004b1668 0xc0004b16f0 0xc0004b17d8] [0xc0004b16b0 0xc0004b17d0] [0x935700 0x935700] 0xc001f1a240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:44:49.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:44:49.892: INFO: rc: 1
Feb 12 11:44:49.892: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002574120 exit status 1   true [0xc0017ca000 0xc0017ca018 0xc0017ca030] [0xc0017ca000 0xc0017ca018 0xc0017ca030] [0xc0017ca010 0xc0017ca028] [0x935700 0x935700] 0xc001f73740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:44:59.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:45:00.074: INFO: rc: 1
Feb 12 11:45:00.075: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0025821b0 exit status 1   true [0xc00259a010 0xc00259a048 0xc00259a060] [0xc00259a010 0xc00259a048 0xc00259a060] [0xc00259a040 0xc00259a058] [0x935700 0x935700] 0xc001d2e7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

Feb 12 11:45:10 - 11:48:24: INFO: RunHostCmd retried every 10s with the same failure (rc: 1; stderr: Error from server (NotFound): pods "ss-2" not found); the 20 intervening retry blocks differ only in their timestamps and in the in-memory command struct addresses.
Feb 12 11:48:34.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn7sb ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 11:48:34.588: INFO: rc: 1
Feb 12 11:48:34.589: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Feb 12 11:48:34.589: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 12 11:48:34.774: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pn7sb
Feb 12 11:48:34.780: INFO: Scaling statefulset ss to 0
Feb 12 11:48:34.792: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 11:48:34.802: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:48:34.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pn7sb" for this suite.
Feb 12 11:48:43.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:48:43.145: INFO: namespace: e2e-tests-statefulset-pn7sb, resource: bindings, ignored listing per whitelist
Feb 12 11:48:43.192: INFO: namespace e2e-tests-statefulset-pn7sb deletion completed in 8.22303908s

• [SLOW TEST:401.773 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
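The spec above exercises ordered scale-down: after ss is scaled to 0, the framework keeps retrying an exec against ss-2, which the scale-down has already removed, until it gives up and verifies the reverse-order teardown. A minimal out-of-band sketch of the same behaviour, assuming a StatefulSet named ss (the namespace below is illustrative, not taken from this run):

# Scale the StatefulSet to 0; pods are removed in reverse ordinal order (ss-2, then ss-1, then ss-0).
kubectl scale statefulset ss --replicas=0 --namespace=statefulset-demo
# Watch the pods terminate; ss-2 should be the first to disappear.
kubectl get pods --namespace=statefulset-demo -w
# Once ss-2 is gone, an exec against it fails exactly as logged above.
kubectl exec --namespace=statefulset-demo ss-2 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
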
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:48:43.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 12 11:48:43.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lj88b'
Feb 12 11:48:43.832: INFO: stderr: ""
Feb 12 11:48:43.832: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 12 11:48:45.518: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:45.518: INFO: Found 0 / 1
Feb 12 11:48:46.134: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:46.135: INFO: Found 0 / 1
Feb 12 11:48:46.841: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:46.841: INFO: Found 0 / 1
Feb 12 11:48:47.874: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:47.875: INFO: Found 0 / 1
Feb 12 11:48:49.431: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:49.431: INFO: Found 0 / 1
Feb 12 11:48:50.449: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:50.449: INFO: Found 0 / 1
Feb 12 11:48:50.899: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:50.900: INFO: Found 0 / 1
Feb 12 11:48:51.925: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:51.925: INFO: Found 0 / 1
Feb 12 11:48:52.867: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:52.867: INFO: Found 0 / 1
Feb 12 11:48:53.853: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:53.854: INFO: Found 1 / 1
Feb 12 11:48:53.854: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 12 11:48:53.869: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:53.869: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 12 11:48:53.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wdzvp --namespace=e2e-tests-kubectl-lj88b -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 12 11:48:54.242: INFO: stderr: ""
Feb 12 11:48:54.243: INFO: stdout: "pod/redis-master-wdzvp patched\n"
STEP: checking annotations
Feb 12 11:48:54.348: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 11:48:54.348: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:48:54.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lj88b" for this suite.
Feb 12 11:49:20.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:49:20.546: INFO: namespace: e2e-tests-kubectl-lj88b, resource: bindings, ignored listing per whitelist
Feb 12 11:49:20.710: INFO: namespace e2e-tests-kubectl-lj88b deletion completed in 26.336238119s

• [SLOW TEST:37.517 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
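The patch step above maps directly onto plain kubectl; the namespace below is illustrative, while the pod name and patch body are the ones from this run:

# Apply the same strategic-merge patch the test uses to add an annotation.
kubectl patch pod redis-master-wdzvp --namespace=kubectl-demo -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation is present.
kubectl get pod redis-master-wdzvp --namespace=kubectl-demo -o jsonpath='{.metadata.annotations.x}'
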
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:49:20.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 12 11:49:43.254: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:43.276: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:45.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:45.380: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:47.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:47.294: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:49.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:49.289: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:51.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:51.284: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:53.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:53.300: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:55.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:55.298: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:57.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:57.292: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:49:59.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:49:59.294: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:01.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:01.292: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:03.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:03.299: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:05.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:05.293: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:07.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:07.289: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:09.276: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:09.293: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:11.277: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:11.292: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 12 11:50:13.278: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 12 11:50:13.298: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:50:13.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5s4qs" for this suite.
Feb 12 11:50:37.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:50:37.565: INFO: namespace: e2e-tests-container-lifecycle-hook-5s4qs, resource: bindings, ignored listing per whitelist
Feb 12 11:50:37.593: INFO: namespace e2e-tests-container-lifecycle-hook-5s4qs deletion completed in 24.25469925s

• [SLOW TEST:76.882 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
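A minimal sketch of a pod with a preStop exec hook like pod-with-prestop-exec-hook above. The spec's hook calls back to the helper pod created in the earlier STEP; the sketch below simplifies the hook to writing a file, and the image, command and name are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: prestop-exec-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before it is sent SIGTERM.
          command: ["/bin/sh", "-c", "echo prestop > /tmp/prestop"]
EOF
# Deleting the pod fires the preStop hook first, then the pod disappears (the polling seen above).
kubectl delete pod prestop-exec-demo
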
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:50:37.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 12 11:50:37.803: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-vf7pz" to be "success or failure"
Feb 12 11:50:37.826: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.672931ms
Feb 12 11:50:39.842: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038709534s
Feb 12 11:50:41.866: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062636232s
Feb 12 11:50:44.570: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.766363839s
Feb 12 11:50:46.723: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.919377361s
Feb 12 11:50:48.741: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.937560911s
Feb 12 11:50:50.835: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.031524964s
Feb 12 11:50:53.019: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.215872364s
STEP: Saw pod success
Feb 12 11:50:53.020: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 12 11:50:53.030: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 12 11:50:53.195: INFO: Waiting for pod pod-host-path-test to disappear
Feb 12 11:50:53.207: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:50:53.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-vf7pz" for this suite.
Feb 12 11:50:59.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:50:59.381: INFO: namespace: e2e-tests-hostpath-vf7pz, resource: bindings, ignored listing per whitelist
Feb 12 11:50:59.409: INFO: namespace e2e-tests-hostpath-vf7pz deletion completed in 6.18960736s

• [SLOW TEST:21.816 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
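A sketch of a hostPath pod comparable to pod-host-path-test; the host path, image and names are illustrative, and the container simply prints the permission bits of the mount:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    # Print the permission bits of the mounted hostPath directory, then exit.
    command: ["/bin/sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
EOF
kubectl logs hostpath-mode-demo -c test-container-1
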
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:50:59.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rc6wm
Feb 12 11:51:09.665: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rc6wm
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 11:51:09.671: INFO: Initial restart count of pod liveness-exec is 0
Feb 12 11:52:00.904: INFO: Restart count of pod e2e-tests-container-probe-rc6wm/liveness-exec is now 1 (51.232612614s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:52:00.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rc6wm" for this suite.
Feb 12 11:52:09.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:52:09.270: INFO: namespace: e2e-tests-container-probe-rc6wm, resource: bindings, ignored listing per whitelist
Feb 12 11:52:09.294: INFO: namespace e2e-tests-container-probe-rc6wm deletion completed in 8.250662453s

• [SLOW TEST:69.884 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
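A minimal liveness-exec pod of the kind this spec starts: the container keeps /tmp/health for a while, then removes it, so the exec probe "cat /tmp/health" begins to fail and the kubelet restarts the container (image and timings are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 30s, then the probed file disappears and the probe starts failing.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# The restart count should go from 0 to 1, as in the log above.
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'
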
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:52:09.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1972e850-4d8e-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 11:52:09.612: INFO: Waiting up to 5m0s for pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-nps8r" to be "success or failure"
Feb 12 11:52:09.619: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.752516ms
Feb 12 11:52:11.634: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022852842s
Feb 12 11:52:13.662: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050008601s
Feb 12 11:52:16.742: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.130806038s
Feb 12 11:52:18.770: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.158308709s
Feb 12 11:52:20.799: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.187378713s
STEP: Saw pod success
Feb 12 11:52:20.799: INFO: Pod "pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 11:52:20.837: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 12 11:52:21.008: INFO: Waiting for pod pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005 to disappear
Feb 12 11:52:21.018: INFO: Pod pod-secrets-1974e80f-4d8e-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:52:21.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nps8r" for this suite.
Feb 12 11:52:27.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:52:27.262: INFO: namespace: e2e-tests-secrets-nps8r, resource: bindings, ignored listing per whitelist
Feb 12 11:52:27.313: INFO: namespace e2e-tests-secrets-nps8r deletion completed in 6.278426688s

• [SLOW TEST:18.018 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
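A sketch of a secret mounted with defaultMode set, which is what this spec checks; the secret name, key, mode and paths below are illustrative:

kubectl create secret generic secret-mode-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the mode of the projected key file; with defaultMode 0400 this should report 400.
    command: ["/bin/sh", "-c", "stat -L -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400
EOF
kubectl logs pod-secret-mode-demo
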
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:52:27.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 11:52:27.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-fq26d'
Feb 12 11:52:30.461: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 11:52:30.461: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 12 11:52:30.540: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 12 11:52:30.558: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 12 11:52:30.687: INFO: scanned /root for discovery docs: 
Feb 12 11:52:30.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-fq26d'
Feb 12 11:52:55.452: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 12 11:52:55.452: INFO: stdout: "Created e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02\nScaling up e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 12 11:52:55.452: INFO: stdout: "Created e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02\nScaling up e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"

STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 12 11:52:55.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fq26d'
Feb 12 11:52:55.657: INFO: stderr: ""
Feb 12 11:52:55.657: INFO: stdout: "e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02-bzh6t "
Feb 12 11:52:55.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02-bzh6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fq26d'
Feb 12 11:52:55.809: INFO: stderr: ""
Feb 12 11:52:55.809: INFO: stdout: "true"
Feb 12 11:52:55.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02-bzh6t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-fq26d'
Feb 12 11:52:56.009: INFO: stderr: ""
Feb 12 11:52:56.010: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 12 11:52:56.010: INFO: e2e-test-nginx-rc-3aaf99657847905f06406f7f76b75c02-bzh6t is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 12 11:52:56.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-fq26d'
Feb 12 11:52:56.167: INFO: stderr: ""
Feb 12 11:52:56.167: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:52:56.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fq26d" for this suite.
Feb 12 11:53:20.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:53:20.249: INFO: namespace: e2e-tests-kubectl-fq26d, resource: bindings, ignored listing per whitelist
Feb 12 11:53:20.432: INFO: namespace e2e-tests-kubectl-fq26d deletion completed in 24.245475144s

• [SLOW TEST:53.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
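The rolling-update flow above, reproduced with plain kubectl of this vintage (both kubectl run --generator=run/v1 and kubectl rolling-update are deprecated, as the stderr above notes, and are gone from newer releases); the namespace is omitted, otherwise the commands mirror the ones in the log:

# Create a single-replica RC via the deprecated run/v1 generator, as the test does.
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
# Roll the RC to the same image: kubectl creates a hash-suffixed copy, shifts the replica over, then renames it back.
kubectl rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# Confirm the replacement pod runs the expected image.
kubectl get pods -l run=e2e-test-nginx-rc -o jsonpath='{.items[0].spec.containers[0].image}'
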
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:53:20.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 11:53:20.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-w8rbl" to be "success or failure"
Feb 12 11:53:20.837: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.533479ms
Feb 12 11:53:22.880: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054918081s
Feb 12 11:53:24.899: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074196569s
Feb 12 11:53:27.089: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263916959s
Feb 12 11:53:29.110: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285576493s
Feb 12 11:53:31.124: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.29904466s
STEP: Saw pod success
Feb 12 11:53:31.124: INFO: Pod "downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 11:53:31.138: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 11:53:31.293: INFO: Waiting for pod downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005 to disappear
Feb 12 11:53:31.356: INFO: Pod downwardapi-volume-43e44f02-4d8e-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:53:31.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w8rbl" for this suite.
Feb 12 11:53:37.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:53:37.590: INFO: namespace: e2e-tests-downward-api-w8rbl, resource: bindings, ignored listing per whitelist
Feb 12 11:53:37.599: INFO: namespace e2e-tests-downward-api-w8rbl deletion completed in 6.228604223s

• [SLOW TEST:17.167 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
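The downward API volume in this spec projects limits.memory for a container that declares no memory limit, so the projected value falls back to the node's allocatable memory. A minimal sketch (names and image are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No memory limit is declared, so the projected value is the node allocatable memory.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downwardapi-memlimit-demo
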
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:53:37.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 12 11:53:37.895: INFO: Waiting up to 5m0s for pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-6j2vz" to be "success or failure"
Feb 12 11:53:37.912: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.450925ms
Feb 12 11:53:39.925: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030336476s
Feb 12 11:53:42.569: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67420692s
Feb 12 11:53:44.632: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73689358s
Feb 12 11:53:46.659: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.764400033s
Feb 12 11:53:48.950: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.055653169s
STEP: Saw pod success
Feb 12 11:53:48.951: INFO: Pod "pod-4e10a966-4d8e-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 11:53:48.964: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4e10a966-4d8e-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 11:53:49.456: INFO: Waiting for pod pod-4e10a966-4d8e-11ea-b4b9-0242ac110005 to disappear
Feb 12 11:53:49.470: INFO: Pod pod-4e10a966-4d8e-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:53:49.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6j2vz" for this suite.
Feb 12 11:53:55.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:53:55.694: INFO: namespace: e2e-tests-emptydir-6j2vz, resource: bindings, ignored listing per whitelist
Feb 12 11:53:55.723: INFO: namespace e2e-tests-emptydir-6j2vz deletion completed in 6.242889309s

• [SLOW TEST:18.124 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
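A sketch of the tmpfs-backed emptyDir check: medium: Memory selects tmpfs, and the filesystem type and mode of the mount can be read from inside the pod (names and image are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Show that /cache is a tmpfs mount and print its permission bits.
    command: ["/bin/sh", "-c", "grep ' /cache ' /proc/mounts; stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo
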
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:53:55.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 11:53:55.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-mbln2" to be "success or failure"
Feb 12 11:53:56.008: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313574ms
Feb 12 11:53:58.062: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062342836s
Feb 12 11:54:00.077: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077642502s
Feb 12 11:54:02.092: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093127928s
Feb 12 11:54:04.160: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161226517s
Feb 12 11:54:06.176: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177019694s
Feb 12 11:54:08.188: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.189002786s
STEP: Saw pod success
Feb 12 11:54:08.188: INFO: Pod "downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 11:54:08.191: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 11:54:08.345: INFO: Waiting for pod downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005 to disappear
Feb 12 11:54:08.383: INFO: Pod downwardapi-volume-58dd3b40-4d8e-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:54:08.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mbln2" for this suite.
Feb 12 11:54:14.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:54:14.692: INFO: namespace: e2e-tests-downward-api-mbln2, resource: bindings, ignored listing per whitelist
Feb 12 11:54:14.714: INFO: namespace e2e-tests-downward-api-mbln2 deletion completed in 6.259878797s

• [SLOW TEST:18.990 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
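Same downward API mechanism as the earlier memory-limit spec, but projecting requests.memory, which requires the container to declare a request; the request value and divisor below are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memrequest-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
EOF
# Should print 32 (the request expressed in units of the 1Mi divisor).
kubectl logs downwardapi-memrequest-demo
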
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:54:14.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xs5zw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 11:54:14.863: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 11:54:47.217: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xs5zw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 11:54:47.217: INFO: >>> kubeConfig: /root/.kube/config
I0212 11:54:47.289587       8 log.go:172] (0xc000d9c4d0) (0xc00219c140) Create stream
I0212 11:54:47.289653       8 log.go:172] (0xc000d9c4d0) (0xc00219c140) Stream added, broadcasting: 1
I0212 11:54:47.295261       8 log.go:172] (0xc000d9c4d0) Reply frame received for 1
I0212 11:54:47.295313       8 log.go:172] (0xc000d9c4d0) (0xc001715cc0) Create stream
I0212 11:54:47.295324       8 log.go:172] (0xc000d9c4d0) (0xc001715cc0) Stream added, broadcasting: 3
I0212 11:54:47.296426       8 log.go:172] (0xc000d9c4d0) Reply frame received for 3
I0212 11:54:47.296469       8 log.go:172] (0xc000d9c4d0) (0xc0024ee8c0) Create stream
I0212 11:54:47.296484       8 log.go:172] (0xc000d9c4d0) (0xc0024ee8c0) Stream added, broadcasting: 5
I0212 11:54:47.297796       8 log.go:172] (0xc000d9c4d0) Reply frame received for 5
I0212 11:54:47.453441       8 log.go:172] (0xc000d9c4d0) Data frame received for 3
I0212 11:54:47.453528       8 log.go:172] (0xc001715cc0) (3) Data frame handling
I0212 11:54:47.453577       8 log.go:172] (0xc001715cc0) (3) Data frame sent
I0212 11:54:47.580177       8 log.go:172] (0xc000d9c4d0) (0xc001715cc0) Stream removed, broadcasting: 3
I0212 11:54:47.580350       8 log.go:172] (0xc000d9c4d0) Data frame received for 1
I0212 11:54:47.580409       8 log.go:172] (0xc00219c140) (1) Data frame handling
I0212 11:54:47.580427       8 log.go:172] (0xc000d9c4d0) (0xc0024ee8c0) Stream removed, broadcasting: 5
I0212 11:54:47.580513       8 log.go:172] (0xc00219c140) (1) Data frame sent
I0212 11:54:47.580541       8 log.go:172] (0xc000d9c4d0) (0xc00219c140) Stream removed, broadcasting: 1
I0212 11:54:47.580583       8 log.go:172] (0xc000d9c4d0) Go away received
I0212 11:54:47.580937       8 log.go:172] (0xc000d9c4d0) (0xc00219c140) Stream removed, broadcasting: 1
I0212 11:54:47.580977       8 log.go:172] (0xc000d9c4d0) (0xc001715cc0) Stream removed, broadcasting: 3
I0212 11:54:47.580995       8 log.go:172] (0xc000d9c4d0) (0xc0024ee8c0) Stream removed, broadcasting: 5
Feb 12 11:54:47.581: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:54:47.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xs5zw" for this suite.
Feb 12 11:55:11.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:55:11.831: INFO: namespace: e2e-tests-pod-network-test-xs5zw, resource: bindings, ignored listing per whitelist
Feb 12 11:55:11.834: INFO: namespace e2e-tests-pod-network-test-xs5zw deletion completed in 24.221838345s

• [SLOW TEST:57.121 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
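The curl inside the ExecWithOptions above is the entire intra-pod check: the host test pod asks the test webserver at one pod IP to dial another pod IP over HTTP. The same probe can be issued by hand; the namespace, pod name and IPs below are the ones printed in this run and are only meaningful inside that cluster:

kubectl exec --namespace=e2e-tests-pod-network-test-xs5zw host-test-container-pod -- /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
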
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:55:11.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-865dc81c-4d8e-11ea-b4b9-0242ac110005
STEP: Creating secret with name s-test-opt-upd-865dc8cc-4d8e-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-865dc81c-4d8e-11ea-b4b9-0242ac110005
STEP: Updating secret s-test-opt-upd-865dc8cc-4d8e-11ea-b4b9-0242ac110005
STEP: Creating secret with name s-test-opt-create-865dc8fe-4d8e-11ea-b4b9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:55:32.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hkcsg" for this suite.
Feb 12 11:55:56.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:55:57.076: INFO: namespace: e2e-tests-secrets-hkcsg, resource: bindings, ignored listing per whitelist
Feb 12 11:55:57.190: INFO: namespace e2e-tests-secrets-hkcsg deletion completed in 24.25894149s

• [SLOW TEST:45.355 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
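
The STEPs above exercise optional secret volumes: two secrets are mounted, the pod is created, then one secret is deleted, one is updated, and a third (initially missing, but marked optional) is created, and the test waits until the mounted files reflect all three changes. A rough kubectl sketch of the same flow, with illustrative names and busybox standing in for the e2e test image:

# Hypothetical replay of the optional-secret-volume flow; names are illustrative.
kubectl create secret generic s-del --from-literal=data-1=value-1
kubectl create secret generic s-upd --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-pod
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/sec-del /etc/sec-upd /etc/sec-create; sleep 5; done"]
    volumeMounts:
    - {name: del, mountPath: /etc/sec-del}
    - {name: upd, mountPath: /etc/sec-upd}
    - {name: create, mountPath: /etc/sec-create}
  volumes:
  - name: del
    secret: {secretName: s-del, optional: true}
  - name: upd
    secret: {secretName: s-upd, optional: true}
  - name: create
    secret: {secretName: s-create, optional: true}   # does not exist yet
EOF
# Mirror the logged STEPs: delete one secret, replace another, create the third,
# then watch the pod until the mounted view catches up (kubelet sync period).
kubectl delete secret s-del
kubectl delete secret s-upd && kubectl create secret generic s-upd --from-literal=data-3=value-3
kubectl create secret generic s-create --from-literal=data-1=value-1
kubectl logs -f secret-volume-pod
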
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:55:57.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 11:55:57.447: INFO: Creating deployment "test-recreate-deployment"
Feb 12 11:55:57.457: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 12 11:55:57.473: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 12 11:55:59.686: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 12 11:55:59.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 11:56:01.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 11:56:03.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 11:56:05.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 11:56:07.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105357, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 11:56:09.709: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 12 11:56:09.728: INFO: Updating deployment test-recreate-deployment
Feb 12 11:56:09.728: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 12 11:56:10.480: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-hbpf6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbpf6/deployments/test-recreate-deployment,UID:a143f106-4d8e-11ea-a994-fa163e34d433,ResourceVersion:21416834,Generation:2,CreationTimestamp:2020-02-12 11:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-12 11:56:10 +0000 UTC 2020-02-12 11:56:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-12 11:56:10 +0000 UTC 2020-02-12 11:55:57 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 12 11:56:10.503: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-hbpf6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbpf6/replicasets/test-recreate-deployment-589c4bfd,UID:a8d2d5d5-4d8e-11ea-a994-fa163e34d433,ResourceVersion:21416832,Generation:1,CreationTimestamp:2020-02-12 11:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a143f106-4d8e-11ea-a994-fa163e34d433 0xc000e67c4f 0xc000e67c60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 11:56:10.503: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 12 11:56:10.504: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-hbpf6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbpf6/replicasets/test-recreate-deployment-5bf7f65dc,UID:a147c224-4d8e-11ea-a994-fa163e34d433,ResourceVersion:21416822,Generation:2,CreationTimestamp:2020-02-12 11:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a143f106-4d8e-11ea-a994-fa163e34d433 0xc000e67d30 0xc000e67d31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 11:56:11.357: INFO: Pod "test-recreate-deployment-589c4bfd-mrcmc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-mrcmc,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-hbpf6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbpf6/pods/test-recreate-deployment-589c4bfd-mrcmc,UID:a8dbc589-4d8e-11ea-a994-fa163e34d433,ResourceVersion:21416835,Generation:0,CreationTimestamp:2020-02-12 11:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a8d2d5d5-4d8e-11ea-a994-fa163e34d433 0xc000335e3f 0xc000335e50}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94gjk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94gjk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94gjk true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000335f30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000335f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:56:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 11:56:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 11:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:56:11.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hbpf6" for this suite.
Feb 12 11:56:17.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:56:17.898: INFO: namespace: e2e-tests-deployment-hbpf6, resource: bindings, ignored listing per whitelist
Feb 12 11:56:17.898: INFO: namespace e2e-tests-deployment-hbpf6 deletion completed in 6.511589376s

• [SLOW TEST:20.707 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
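
What the object dumps above show: with strategy type Recreate, the old ReplicaSet (test-recreate-deployment-5bf7f65dc, redis) is scaled to 0 before the new one (test-recreate-deployment-589c4bfd, nginx) creates any pods, so old and new pods never run side by side. A hedged kubectl sketch of the same rollout (illustrative names; the patch/set-image commands are ordinary kubectl, not the API calls the e2e framework actually makes):

# Hypothetical re-creation of the Recreate rollout seen above.
kubectl create deployment test-recreate --image=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl patch deployment test-recreate -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
kubectl set image deployment/test-recreate redis=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/test-recreate
kubectl get rs -l app=test-recreate   # old ReplicaSet sits at 0 replicas; the new one owns the single pod
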
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:56:17.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-4v5rg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4v5rg to expose endpoints map[]
Feb 12 11:56:18.181: INFO: Get endpoints failed (10.990047ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 12 11:56:19.198: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4v5rg exposes endpoints map[] (1.027657783s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-4v5rg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4v5rg to expose endpoints map[pod1:[80]]
Feb 12 11:56:25.674: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (6.465905628s elapsed, will retry)
Feb 12 11:56:31.041: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4v5rg exposes endpoints map[pod1:[80]] (11.832432915s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-4v5rg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4v5rg to expose endpoints map[pod1:[80] pod2:[80]]
Feb 12 11:56:35.620: INFO: Unexpected endpoints: found map[ae3b5512-4d8e-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.565947517s elapsed, will retry)
Feb 12 11:56:41.515: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4v5rg exposes endpoints map[pod1:[80] pod2:[80]] (10.46098507s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-4v5rg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4v5rg to expose endpoints map[pod2:[80]]
Feb 12 11:56:42.764: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4v5rg exposes endpoints map[pod2:[80]] (1.235724008s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-4v5rg
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4v5rg to expose endpoints map[]
Feb 12 11:56:44.207: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4v5rg exposes endpoints map[] (1.436018788s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:56:45.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4v5rg" for this suite.
Feb 12 11:57:10.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:57:10.276: INFO: namespace: e2e-tests-services-4v5rg, resource: bindings, ignored listing per whitelist
Feb 12 11:57:10.276: INFO: namespace e2e-tests-services-4v5rg deletion completed in 24.935127214s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:52.378 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
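
The validation loop above is checking endpoint bookkeeping: a Service's Endpoints object gains an address when a pod matching its selector becomes Ready and loses it when that pod is deleted, ending back at the empty map[]. A small hedged sketch of the same behaviour with kubectl (illustrative names; nginx stands in for the e2e serve-hostname image):

# Hypothetical endpoint-tracking check.
kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo --port=80
kubectl create service clusterip endpoint-demo --tcp=80:80   # generated selector is app=endpoint-demo
kubectl get endpoints endpoint-demo -w   # pod1's IP:80 appears once the pod is Ready
kubectl delete pod pod1                  # ...and disappears again
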
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:57:10.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb 12 11:57:10.989: INFO: Waiting up to 5m0s for pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z" in namespace "e2e-tests-svcaccounts-w8ztx" to be "success or failure"
Feb 12 11:57:11.005: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 15.910216ms
Feb 12 11:57:13.021: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031592047s
Feb 12 11:57:15.042: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052884172s
Feb 12 11:57:17.807: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818087391s
Feb 12 11:57:19.837: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.847672192s
Feb 12 11:57:21.856: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867277049s
Feb 12 11:57:23.914: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.924823283s
Feb 12 11:57:25.938: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.94889196s
Feb 12 11:57:28.116: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Pending", Reason="", readiness=false. Elapsed: 17.127297061s
Feb 12 11:57:30.138: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.148592014s
STEP: Saw pod success
Feb 12 11:57:30.138: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z" satisfied condition "success or failure"
Feb 12 11:57:30.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z container token-test: 
STEP: delete the pod
Feb 12 11:57:30.324: INFO: Waiting for pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z to disappear
Feb 12 11:57:30.357: INFO: Pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-hmk4z no longer exists
STEP: Creating a pod to test consume service account root CA
Feb 12 11:57:30.387: INFO: Waiting up to 5m0s for pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9" in namespace "e2e-tests-svcaccounts-w8ztx" to be "success or failure"
Feb 12 11:57:30.671: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 283.524238ms
Feb 12 11:57:32.763: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375479918s
Feb 12 11:57:34.774: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386324904s
Feb 12 11:57:37.187: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799299661s
Feb 12 11:57:39.202: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.814351972s
Feb 12 11:57:41.218: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.83050269s
Feb 12 11:57:43.349: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.961664769s
Feb 12 11:57:45.610: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.222719956s
Feb 12 11:57:47.622: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.234637533s
STEP: Saw pod success
Feb 12 11:57:47.622: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9" satisfied condition "success or failure"
Feb 12 11:57:47.627: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9 container root-ca-test: 
STEP: delete the pod
Feb 12 11:57:49.093: INFO: Waiting for pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9 to disappear
Feb 12 11:57:49.233: INFO: Pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-wsrm9 no longer exists
STEP: Creating a pod to test consume service account namespace
Feb 12 11:57:49.278: INFO: Waiting up to 5m0s for pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq" in namespace "e2e-tests-svcaccounts-w8ztx" to be "success or failure"
Feb 12 11:57:49.287: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724948ms
Feb 12 11:57:51.410: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131676652s
Feb 12 11:57:53.428: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148963266s
Feb 12 11:57:56.435: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 7.156215665s
Feb 12 11:57:58.453: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 9.174736022s
Feb 12 11:58:00.472: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.193830015s
Feb 12 11:58:02.549: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.270033963s
Feb 12 11:58:05.716: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.437887s
Feb 12 11:58:07.735: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.456379969s
Feb 12 11:58:09.795: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.51644763s
STEP: Saw pod success
Feb 12 11:58:09.795: INFO: Pod "pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq" satisfied condition "success or failure"
Feb 12 11:58:09.803: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq container namespace-test: 
STEP: delete the pod
Feb 12 11:58:10.250: INFO: Waiting for pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq to disappear
Feb 12 11:58:10.285: INFO: Pod pod-service-account-cd16adc5-4d8e-11ea-b4b9-0242ac110005-j9pxq no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:58:10.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-w8ztx" for this suite.
Feb 12 11:58:18.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:58:18.788: INFO: namespace: e2e-tests-svcaccounts-w8ztx, resource: bindings, ignored listing per whitelist
Feb 12 11:58:18.788: INFO: namespace e2e-tests-svcaccounts-w8ztx deletion completed in 8.475847606s

• [SLOW TEST:68.511 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
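
The three pods above each read one of the files that the kubelet projects from the auto-created service-account token secret: token, ca.crt, and namespace, all under /var/run/secrets/kubernetes.io/serviceaccount. A quick manual spot-check of the same mount, assuming any pod that uses the default service account (pod name and image here are illustrative):

# Hypothetical spot-check of the automounted service-account files.
kubectl run sa-check --image=docker.io/library/nginx:1.14-alpine --restart=Never
# Once the pod is Running:
kubectl exec sa-check -- ls /var/run/secrets/kubernetes.io/serviceaccount    # ca.crt  namespace  token
kubectl exec sa-check -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
kubectl delete pod sa-check
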
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:58:18.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-r6lg4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 11:58:19.028: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 11:58:51.252: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-r6lg4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 11:58:51.252: INFO: >>> kubeConfig: /root/.kube/config
I0212 11:58:51.357300       8 log.go:172] (0xc00124e580) (0xc001d0f4a0) Create stream
I0212 11:58:51.357413       8 log.go:172] (0xc00124e580) (0xc001d0f4a0) Stream added, broadcasting: 1
I0212 11:58:51.370416       8 log.go:172] (0xc00124e580) Reply frame received for 1
I0212 11:58:51.370502       8 log.go:172] (0xc00124e580) (0xc001c2ef00) Create stream
I0212 11:58:51.370517       8 log.go:172] (0xc00124e580) (0xc001c2ef00) Stream added, broadcasting: 3
I0212 11:58:51.372106       8 log.go:172] (0xc00124e580) Reply frame received for 3
I0212 11:58:51.372143       8 log.go:172] (0xc00124e580) (0xc001b9f0e0) Create stream
I0212 11:58:51.372152       8 log.go:172] (0xc00124e580) (0xc001b9f0e0) Stream added, broadcasting: 5
I0212 11:58:51.373218       8 log.go:172] (0xc00124e580) Reply frame received for 5
I0212 11:58:52.549507       8 log.go:172] (0xc00124e580) Data frame received for 3
I0212 11:58:52.549731       8 log.go:172] (0xc001c2ef00) (3) Data frame handling
I0212 11:58:52.549768       8 log.go:172] (0xc001c2ef00) (3) Data frame sent
I0212 11:58:52.677146       8 log.go:172] (0xc00124e580) Data frame received for 1
I0212 11:58:52.677329       8 log.go:172] (0xc001d0f4a0) (1) Data frame handling
I0212 11:58:52.677373       8 log.go:172] (0xc001d0f4a0) (1) Data frame sent
I0212 11:58:52.677408       8 log.go:172] (0xc00124e580) (0xc001d0f4a0) Stream removed, broadcasting: 1
I0212 11:58:52.677654       8 log.go:172] (0xc00124e580) (0xc001c2ef00) Stream removed, broadcasting: 3
I0212 11:58:52.677982       8 log.go:172] (0xc00124e580) (0xc001b9f0e0) Stream removed, broadcasting: 5
I0212 11:58:52.678071       8 log.go:172] (0xc00124e580) (0xc001d0f4a0) Stream removed, broadcasting: 1
I0212 11:58:52.678085       8 log.go:172] (0xc00124e580) (0xc001c2ef00) Stream removed, broadcasting: 3
I0212 11:58:52.678099       8 log.go:172] (0xc00124e580) (0xc001b9f0e0) Stream removed, broadcasting: 5
Feb 12 11:58:52.678: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:58:52.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-r6lg4" for this suite.
Feb 12 11:59:16.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 11:59:17.016: INFO: namespace: e2e-tests-pod-network-test-r6lg4, resource: bindings, ignored listing per whitelist
Feb 12 11:59:17.063: INFO: namespace e2e-tests-pod-network-test-r6lg4 deletion completed in 24.363604589s

• [SLOW TEST:58.274 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
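
The UDP variant above sends the literal string "hostName" to the netserver pod's UDP port with nc and expects the pod's hostname back ("Found all expected endpoints: [netserver-0]"). A manual re-run of the logged probe, assuming the namespace, pod names, and IP from this run:

# Hypothetical replay of the logged UDP probe; names and IPs come from the log above.
kubectl exec -n e2e-tests-pod-network-test-r6lg4 host-test-container-pod -c hostexec -- \
  /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"
# Expected reply: netserver-0
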
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 11:59:17.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 12 11:59:17.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:17.825: INFO: stderr: ""
Feb 12 11:59:17.825: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 11:59:17.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:18.145: INFO: stderr: ""
Feb 12 11:59:18.145: INFO: stdout: "update-demo-nautilus-jl698 update-demo-nautilus-z6tbp "
Feb 12 11:59:18.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl698 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:18.373: INFO: stderr: ""
Feb 12 11:59:18.373: INFO: stdout: ""
Feb 12 11:59:18.373: INFO: update-demo-nautilus-jl698 is created but not running
Feb 12 11:59:23.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:23.561: INFO: stderr: ""
Feb 12 11:59:23.561: INFO: stdout: "update-demo-nautilus-jl698 update-demo-nautilus-z6tbp "
Feb 12 11:59:23.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl698 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:23.707: INFO: stderr: ""
Feb 12 11:59:23.707: INFO: stdout: ""
Feb 12 11:59:23.707: INFO: update-demo-nautilus-jl698 is created but not running
Feb 12 11:59:28.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:28.841: INFO: stderr: ""
Feb 12 11:59:28.841: INFO: stdout: "update-demo-nautilus-jl698 update-demo-nautilus-z6tbp "
Feb 12 11:59:28.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl698 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:28.976: INFO: stderr: ""
Feb 12 11:59:28.976: INFO: stdout: ""
Feb 12 11:59:28.976: INFO: update-demo-nautilus-jl698 is created but not running
Feb 12 11:59:33.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:34.219: INFO: stderr: ""
Feb 12 11:59:34.219: INFO: stdout: "update-demo-nautilus-jl698 update-demo-nautilus-z6tbp "
Feb 12 11:59:34.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl698 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:34.354: INFO: stderr: ""
Feb 12 11:59:34.354: INFO: stdout: "true"
Feb 12 11:59:34.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jl698 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:34.469: INFO: stderr: ""
Feb 12 11:59:34.470: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 11:59:34.470: INFO: validating pod update-demo-nautilus-jl698
Feb 12 11:59:34.509: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 11:59:34.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 11:59:34.509: INFO: update-demo-nautilus-jl698 is verified up and running
Feb 12 11:59:34.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6tbp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:34.652: INFO: stderr: ""
Feb 12 11:59:34.653: INFO: stdout: "true"
Feb 12 11:59:34.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6tbp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:34.757: INFO: stderr: ""
Feb 12 11:59:34.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 11:59:34.757: INFO: validating pod update-demo-nautilus-z6tbp
Feb 12 11:59:34.765: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 11:59:34.765: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 11:59:34.765: INFO: update-demo-nautilus-z6tbp is verified up and running
STEP: using delete to clean up resources
Feb 12 11:59:34.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:34.883: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 11:59:34.883: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 12 11:59:34.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-b2wdm'
Feb 12 11:59:36.666: INFO: stderr: "No resources found.\n"
Feb 12 11:59:36.666: INFO: stdout: ""
Feb 12 11:59:36.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-b2wdm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 11:59:36.850: INFO: stderr: ""
Feb 12 11:59:36.850: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 11:59:36.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-b2wdm" for this suite.
Feb 12 12:00:01.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:00:01.182: INFO: namespace: e2e-tests-kubectl-b2wdm, resource: bindings, ignored listing per whitelist
Feb 12 12:00:01.212: INFO: namespace e2e-tests-kubectl-b2wdm deletion completed in 24.318466116s

• [SLOW TEST:44.148 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
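
The polling above repeatedly renders go-templates over `kubectl get pods` output until every update-demo container reports a running state, then force-deletes the replication controller and confirms nothing is left behind. A shorter equivalent of the readiness wait and cleanup, assuming a kubectl that has `kubectl wait` (available since v1.11):

# Hypothetical condensed wait-and-cleanup for the update-demo RC.
kubectl wait --for=condition=Ready pod -l name=update-demo --timeout=5m
kubectl delete rc update-demo-nautilus --grace-period=0 --force
kubectl get rc,svc,pods -l name=update-demo --no-headers   # expect "No resources found."
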
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:00:01.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-32b705e3-4d8f-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:00:01.509: INFO: Waiting up to 5m0s for pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-98n2z" to be "success or failure"
Feb 12 12:00:01.679: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 169.262516ms
Feb 12 12:00:04.146: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.636354578s
Feb 12 12:00:06.220: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.710374315s
Feb 12 12:00:08.838: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.328766105s
Feb 12 12:00:10.850: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.340169783s
Feb 12 12:00:12.873: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.363431087s
STEP: Saw pod success
Feb 12 12:00:12.873: INFO: Pod "pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:00:12.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 12 12:00:14.107: INFO: Waiting for pod pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:00:14.116: INFO: Pod pod-secrets-32b90df2-4d8f-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:00:14.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-98n2z" for this suite.
Feb 12 12:00:20.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:00:20.223: INFO: namespace: e2e-tests-secrets-98n2z, resource: bindings, ignored listing per whitelist
Feb 12 12:00:20.380: INFO: namespace e2e-tests-secrets-98n2z deletion completed in 6.255643756s

• [SLOW TEST:19.168 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
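
"With mappings" above means the secret volume uses items to remap a key to a custom path inside the mount instead of exposing it under its own name. A hedged sketch of the same pod shape (illustrative names and paths; busybox stands in for the e2e mounttest image the framework uses):

# Hypothetical secret volume with a key-to-path mapping.
kubectl create secret generic secret-test-map --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
EOF
kubectl logs pod-secrets-mapped   # prints value-1 once the pod has Succeeded
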
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:00:20.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 12 12:00:20.909: INFO: Waiting up to 5m0s for pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-29jl9" to be "success or failure"
Feb 12 12:00:20.943: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.792256ms
Feb 12 12:00:22.956: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046923752s
Feb 12 12:00:24.984: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074918782s
Feb 12 12:00:26.997: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088212542s
Feb 12 12:00:29.018: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109017682s
Feb 12 12:00:31.116: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207041358s
Feb 12 12:00:33.135: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.226399033s
STEP: Saw pod success
Feb 12 12:00:33.135: INFO: Pod "pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:00:33.146: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:00:33.887: INFO: Waiting for pod pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:00:33.908: INFO: Pod pod-3e46d7d9-4d8f-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:00:33.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-29jl9" for this suite.
Feb 12 12:00:39.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:00:40.202: INFO: namespace: e2e-tests-emptydir-29jl9, resource: bindings, ignored listing per whitelist
Feb 12 12:00:40.285: INFO: namespace e2e-tests-emptydir-29jl9 deletion completed in 6.370012739s

• [SLOW TEST:19.904 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
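
The (non-root,0777,tmpfs) case above roughly does the following: mount a memory-backed emptyDir, write a file into it as a non-root user with mode 0777, and read the mode back. A hedged sketch of the same setup (busybox standing in for the e2e mounttest image; the uid and paths are illustrative):

# Hypothetical non-root write into a tmpfs-backed emptyDir.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-perm-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f && id"]
    securityContext:
      runAsUser: 1001
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-perm-check   # shows the 0777 file and a non-root uid once the pod has Succeeded
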
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:00:40.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0212 12:00:54.714530       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 12:00:54.715: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:00:54.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hkzqc" for this suite.
Feb 12 12:01:15.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:01:15.423: INFO: namespace: e2e-tests-gc-hkzqc, resource: bindings, ignored listing per whitelist
Feb 12 12:01:15.563: INFO: namespace e2e-tests-gc-hkzqc deletion completed in 20.469620425s

• [SLOW TEST:35.277 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
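The garbage-collector spec above gives half of the pods owned by simpletest-rc-to-be-deleted a second owner reference to simpletest-rc-to-stay, then deletes the first RC with foreground (wait-for-dependents) propagation; a dependent that still has a valid owner must survive. A sketch of the two moving parts under those assumptions, using the pre-context client-go Delete signature of this suite's era; the UID parameter and namespace argument are placeholders.

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // ownerRefToRCToStay is the extra owner reference added to half of the
    // dependent pods so that they keep a live owner after the other RC is gone.
    func ownerRefToRCToStay(uid types.UID) metav1.OwnerReference {
        return metav1.OwnerReference{
            APIVersion: "v1",
            Kind:       "ReplicationController",
            Name:       "simpletest-rc-to-stay",
            UID:        uid,
        }
    }

    // foregroundDeleteRC removes simpletest-rc-to-be-deleted and asks the GC to
    // wait for its dependents; pods that also carry the owner reference above
    // still have a valid owner and therefore must not be deleted.
    func foregroundDeleteRC(c kubernetes.Interface, ns string) error {
        policy := metav1.DeletePropagationForeground
        return c.CoreV1().ReplicationControllers(ns).Delete(
            "simpletest-rc-to-be-deleted",
            &metav1.DeleteOptions{PropagationPolicy: &policy},
        )
    }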
SS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:01:15.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dglxs.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-dglxs.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 12:01:31.841: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.849: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.863: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.887: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.900: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.905: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.912: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.921: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.929: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.935: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.947: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.953: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.960: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.969: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.977: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.983: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.990: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.995: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:31.999: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:32.022: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005: the server could not find the requested resource (get pods dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005)
Feb 12 12:01:32.022: INFO: Lookups using e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-dglxs.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 12 12:01:37.279: INFO: DNS probes using e2e-tests-dns-dglxs/dns-test-5efde0ad-4d8f-11ea-b4b9-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:01:37.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-dglxs" for this suite.
Feb 12 12:01:45.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:01:45.910: INFO: namespace: e2e-tests-dns-dglxs, resource: bindings, ignored listing per whitelist
Feb 12 12:01:45.919: INFO: namespace e2e-tests-dns-dglxs deletion completed in 8.495748293s

• [SLOW TEST:30.356 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
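The wheezy/jessie probe loops above only resolve the API service name at each level of qualification and drop an OK marker per name. Reduced to its essence (and leaving out the per-protocol dig flags), the same check in Go's resolver looks like the sketch below; it only behaves this way when run inside a pod, where /etc/resolv.conf carries the cluster search path.

    import (
        "fmt"
        "net"
    )

    // resolveClusterDNS checks the same names the probe pods query: the API
    // service at each level of qualification. Inside a pod the resolv.conf
    // search path makes the short forms resolvable; the probes additionally
    // repeat each lookup over UDP and TCP via dig, which is omitted here.
    func resolveClusterDNS() {
        for _, name := range []string{
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster.local",
        } {
            addrs, err := net.LookupHost(name)
            if err != nil {
                fmt.Printf("FAIL %s: %v\n", name, err)
                continue
            }
            fmt.Printf("OK   %s -> %v\n", name, addrs)
        }
    }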
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:01:45.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:01:56.313: INFO: Waiting up to 5m0s for pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005" in namespace "e2e-tests-pods-6m5z2" to be "success or failure"
Feb 12 12:01:56.354: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.96053ms
Feb 12 12:01:58.584: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270412222s
Feb 12 12:02:00.607: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293694619s
Feb 12 12:02:02.765: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451307155s
Feb 12 12:02:04.804: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490647941s
Feb 12 12:02:06.825: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.5110715s
STEP: Saw pod success
Feb 12 12:02:06.825: INFO: Pod "client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:02:06.831: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005 container env3cont: 
STEP: delete the pod
Feb 12 12:02:06.961: INFO: Waiting for pod client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:02:06.977: INFO: Pod client-envvars-771dac10-4d8f-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:02:06.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6m5z2" for this suite.
Feb 12 12:02:55.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:02:55.262: INFO: namespace: e2e-tests-pods-6m5z2, resource: bindings, ignored listing per whitelist
Feb 12 12:02:55.280: INFO: namespace e2e-tests-pods-6m5z2 deletion completed in 48.289115461s

• [SLOW TEST:69.360 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
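The spec above depends on kubelet's service environment injection: a pod created after a service exists sees <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT variables for it. A sketch of the client side only, reusing the env3cont container name from the log; the image and command are placeholders.

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newEnvCheckPod builds a pod whose only job is to dump the service
    // environment variables kubelet injects at container start, e.g.
    // FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT for a service named
    // "fooservice" that existed before the pod was created.
    func newEnvCheckPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "client-envvars-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env3cont",
                    Image:   "busybox", // placeholder image
                    Command: []string{"sh", "-c", "env | grep _SERVICE_"},
                }},
            },
        }
    }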
SSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:02:55.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-wz9xg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wz9xg to expose endpoints map[]
Feb 12 12:02:55.693: INFO: Get endpoints failed (33.795714ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 12 12:02:56.706: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wz9xg exposes endpoints map[] (1.046987743s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-wz9xg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wz9xg to expose endpoints map[pod1:[100]]
Feb 12 12:03:01.823: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.090917952s elapsed, will retry)
Feb 12 12:03:07.710: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (10.977912431s elapsed, will retry)
Feb 12 12:03:08.730: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wz9xg exposes endpoints map[pod1:[100]] (11.998280953s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-wz9xg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wz9xg to expose endpoints map[pod1:[100] pod2:[101]]
Feb 12 12:03:14.139: INFO: Unexpected endpoints: found map[9b2ac260-4d8f-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.398478264s elapsed, will retry)
Feb 12 12:03:20.109: INFO: Unexpected endpoints: found map[9b2ac260-4d8f-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (11.368364535s elapsed, will retry)
Feb 12 12:03:21.132: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wz9xg exposes endpoints map[pod1:[100] pod2:[101]] (12.391875879s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-wz9xg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wz9xg to expose endpoints map[pod2:[101]]
Feb 12 12:03:21.216: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wz9xg exposes endpoints map[pod2:[101]] (42.286691ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-wz9xg
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wz9xg to expose endpoints map[]
Feb 12 12:03:22.400: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wz9xg exposes endpoints map[] (1.10938459s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:03:22.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-wz9xg" for this suite.
Feb 12 12:03:30.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:03:30.988: INFO: namespace: e2e-tests-services-wz9xg, resource: bindings, ignored listing per whitelist
Feb 12 12:03:31.009: INFO: namespace e2e-tests-services-wz9xg deletion completed in 8.266504248s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:35.728 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
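The multiport spec above declares a service with two named ports and checks that the endpoints map tracks pod1 on target port 100 and pod2 on target port 101 as pods come and go. A sketch of a service of that shape; the target ports follow the log, while the service-side ports, port names and selector label are placeholders.

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // newMultiportService declares two named ports; each backend pod serves
    // only one of the two target ports, which is why the endpoints map in the
    // log moves through map[pod1:[100]], map[pod1:[100] pod2:[101]] and back
    // to map[] as the pods are created and deleted.
    func newMultiportService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "multi-endpoint-test"}, // placeholder label
                Ports: []corev1.ServicePort{
                    // front-end ports and names are placeholders
                    {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                    {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
                },
            },
        }
    }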
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:03:31.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-afb2ceb7-4d8f-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:03:31.173: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-d55gh" to be "success or failure"
Feb 12 12:03:31.184: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217536ms
Feb 12 12:03:33.285: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11171747s
Feb 12 12:03:35.303: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12931717s
Feb 12 12:03:37.502: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328766006s
Feb 12 12:03:39.521: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.348108557s
Feb 12 12:03:41.547: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.373420069s
Feb 12 12:03:43.948: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.774747692s
STEP: Saw pod success
Feb 12 12:03:43.948: INFO: Pod "pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:03:44.398: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 12:03:44.628: INFO: Waiting for pod pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:03:44.644: INFO: Pod pod-projected-secrets-afb3660f-4d8f-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:03:44.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d55gh" for this suite.
Feb 12 12:03:50.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:03:51.012: INFO: namespace: e2e-tests-projected-d55gh, resource: bindings, ignored listing per whitelist
Feb 12 12:03:51.025: INFO: namespace e2e-tests-projected-d55gh deletion completed in 6.351509263s

• [SLOW TEST:20.016 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
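The projected-secret spec above mounts a secret through a projected volume with defaultMode set and has the container report the resulting file mode. A sketch of that volume layout, reusing the projected-secret-volume-test container name from the log; the 0400 mode, secret name parameter, mount path and image are placeholders.

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newProjectedSecretPod mounts a secret via a projected volume with
    // defaultMode set, so every projected file gets that mode unless an item
    // overrides it; the container prints the modes for the test to assert on.
    func newProjectedSecretPod(secretName string) *corev1.Pod {
        mode := int32(0400) // placeholder mode
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &mode,
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox", // placeholder image
                    Command: []string{"sh", "-c", "stat -c %a /etc/projected-secret-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-secret-volume",
                        MountPath: "/etc/projected-secret-volume",
                    }},
                }},
            },
        }
    }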
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:03:51.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:03:51.306: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 12 12:03:56.481: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 12:04:00.548: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 12 12:04:02.574: INFO: Creating deployment "test-rollover-deployment"
Feb 12 12:04:02.723: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 12 12:04:04.957: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 12 12:04:05.384: INFO: Ensure that both replica sets have 1 created replica
Feb 12 12:04:05.494: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 12 12:04:05.550: INFO: Updating deployment test-rollover-deployment
Feb 12 12:04:05.550: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 12 12:04:08.066: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 12 12:04:08.099: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 12 12:04:08.136: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:08.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:10.162: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:10.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:12.172: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:12.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:16.188: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:16.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:18.162: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:18.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:20.183: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:20.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105859, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:22.182: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:22.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105859, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:24.160: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:24.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105859, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:26.176: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:26.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105859, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:28.162: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 12:04:28.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105859, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717105842, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:04:30.180: INFO: 
Feb 12 12:04:30.180: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 12 12:04:30.199: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-2wvrt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wvrt/deployments/test-rollover-deployment,UID:c26d4686-4d8f-11ea-a994-fa163e34d433,ResourceVersion:21418067,Generation:2,CreationTimestamp:2020-02-12 12:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-12 12:04:02 +0000 UTC 2020-02-12 12:04:02 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-12 12:04:30 +0000 UTC 2020-02-12 12:04:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 12 12:04:30.293: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-2wvrt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wvrt/replicasets/test-rollover-deployment-5b8479fdb6,UID:c433b238-4d8f-11ea-a994-fa163e34d433,ResourceVersion:21418058,Generation:2,CreationTimestamp:2020-02-12 12:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c26d4686-4d8f-11ea-a994-fa163e34d433 0xc001693f97 0xc001693f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 12 12:04:30.293: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 12 12:04:30.294: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-2wvrt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wvrt/replicasets/test-rollover-controller,UID:bba5ba99-4d8f-11ea-a994-fa163e34d433,ResourceVersion:21418066,Generation:2,CreationTimestamp:2020-02-12 12:03:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c26d4686-4d8f-11ea-a994-fa163e34d433 0xc001693d5f 0xc001693d70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 12:04:30.294: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-2wvrt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2wvrt/replicasets/test-rollover-deployment-58494b7559,UID:c287cec0-4d8f-11ea-a994-fa163e34d433,ResourceVersion:21418023,Generation:2,CreationTimestamp:2020-02-12 12:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c26d4686-4d8f-11ea-a994-fa163e34d433 0xc001693ec7 0xc001693ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 12:04:30.307: INFO: Pod "test-rollover-deployment-5b8479fdb6-jl4pq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-jl4pq,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-2wvrt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2wvrt/pods/test-rollover-deployment-5b8479fdb6-jl4pq,UID:c4a72e29-4d8f-11ea-a994-fa163e34d433,ResourceVersion:21418043,Generation:0,CreationTimestamp:2020-02-12 12:04:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 c433b238-4d8f-11ea-a994-fa163e34d433 0xc001c52837 0xc001c52838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ql2x8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ql2x8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-ql2x8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c528a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c52950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:04:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:04:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:04:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:04:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-12 12:04:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-12 12:04:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://bcb34c059f23aefeb1de485b20df22194e8b2a6154e9700fae747eae8818e821}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:04:30.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2wvrt" for this suite.
Feb 12 12:04:38.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:04:38.586: INFO: namespace: e2e-tests-deployment-2wvrt, resource: bindings, ignored listing per whitelist
Feb 12 12:04:38.765: INFO: namespace e2e-tests-deployment-2wvrt deletion completed in 8.447993252s

• [SLOW TEST:47.739 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
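The deployment dump above pins down the rollover setup: one replica, minReadySeconds 10, and a RollingUpdate strategy with maxUnavailable 0 / maxSurge 1 over the name=rollover-pod selector, ending on the redis:1.0 image. A sketch reconstructing just that spec from the dump; treat it as illustrative rather than the suite's construction code.

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // newRolloverDeployment mirrors the spec dumped above: one replica, a
    // 10-second minReadySeconds gate, and a rolling update that never drops
    // below one available pod (maxUnavailable 0, maxSurge 1), which is why
    // the suite waits for the new replica set to become ready before the old
    // ones are scaled to zero.
    func newRolloverDeployment() *appsv1.Deployment {
        replicas := int32(1)
        maxUnavailable := intstr.FromInt(0)
        maxSurge := intstr.FromInt(1)
        labels := map[string]string{"name": "rollover-pod"}
        return &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment", Labels: labels},
            Spec: appsv1.DeploymentSpec{
                Replicas:        &replicas,
                MinReadySeconds: 10,
                Selector:        &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxUnavailable: &maxUnavailable,
                        MaxSurge:       &maxSurge,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "redis",
                            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                        }},
                    },
                },
            },
        }
    }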
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:04:38.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:04:40.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-6gkjh" to be "success or failure"
Feb 12 12:04:40.262: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.696011ms
Feb 12 12:04:42.353: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145318084s
Feb 12 12:04:44.375: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167378588s
Feb 12 12:04:46.618: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409498651s
Feb 12 12:04:48.642: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433915905s
Feb 12 12:04:50.672: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463719225s
STEP: Saw pod success
Feb 12 12:04:50.672: INFO: Pod "downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:04:50.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 12:04:50.775: INFO: Waiting for pod downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:04:50.780: INFO: Pod downwardapi-volume-d8afe7f5-4d8f-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:04:50.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6gkjh" for this suite.
Feb 12 12:04:56.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:04:57.069: INFO: namespace: e2e-tests-downward-api-6gkjh, resource: bindings, ignored listing per whitelist
Feb 12 12:04:57.105: INFO: namespace e2e-tests-downward-api-6gkjh deletion completed in 6.234155255s

• [SLOW TEST:18.340 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
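The downward-API spec above publishes the container's own CPU limit through a downwardAPI volume file (resourceFieldRef limits.cpu) and reads it back. A sketch of such a pod, reusing the client-container name from the log; the 500m limit, mount path and image are placeholders.

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // newCPULimitDownwardAPIPod exposes the client-container's own CPU limit
    // at /etc/podinfo/cpu_limit via the downward API; with the default divisor
    // of "1" the value is reported in whole cores, rounded up.
    func newCPULimitDownwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // placeholder image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("500m"), // placeholder limit
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
    }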
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:04:57.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-e30042ea-4d8f-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:04:57.255: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-dg424" to be "success or failure"
Feb 12 12:04:57.267: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.473454ms
Feb 12 12:04:59.681: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425554617s
Feb 12 12:05:01.715: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460069677s
Feb 12 12:05:04.223: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967234187s
Feb 12 12:05:06.237: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.981908494s
Feb 12 12:05:08.249: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.993590513s
Feb 12 12:05:10.465: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.209288467s
STEP: Saw pod success
Feb 12 12:05:10.465: INFO: Pod "pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:05:10.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 12:05:10.959: INFO: Waiting for pod pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:05:10.997: INFO: Pod pod-projected-configmaps-e30121ee-4d8f-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:05:10.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dg424" for this suite.
Feb 12 12:05:17.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:05:17.212: INFO: namespace: e2e-tests-projected-dg424, resource: bindings, ignored listing per whitelist
Feb 12 12:05:17.459: INFO: namespace e2e-tests-projected-dg424 deletion completed in 6.435030216s

• [SLOW TEST:20.354 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
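Here the same projected ConfigMap is mounted at two paths in one pod. A rough sketch under assumed names (not the generated test objects):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo                   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-cm-1/data-1 /etc/projected-cm-2/data-1"]
    volumeMounts:
    - name: projected-cm-1
      mountPath: /etc/projected-cm-1
    - name: projected-cm-2
      mountPath: /etc/projected-cm-2
  volumes:
  - name: projected-cm-1
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
  - name: projected-cm-2
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
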
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:05:17.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 12 12:05:41.862: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 12:05:41.875: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 12:05:43.875: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 12:05:43.914: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 12:05:45.875: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 12:05:46.368: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 12:05:47.875: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 12:05:47.896: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 12:05:49.876: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 12:05:49.898: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:05:49.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ckpcd" for this suite.
Feb 12 12:06:29.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:06:30.305: INFO: namespace: e2e-tests-container-lifecycle-hook-ckpcd, resource: bindings, ignored listing per whitelist
Feb 12 12:06:30.333: INFO: namespace e2e-tests-container-lifecycle-hook-ckpcd deletion completed in 40.426848297s

• [SLOW TEST:72.874 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
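The hook being checked is a postStart httpGet. A hedged sketch of such a pod follows; the handler address and names are placeholders, whereas the real test points the hook at the helper pod it created in the "create the container to handle the HTTPGet hook request" step.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo   # hypothetical name
spec:
  containers:
  - name: main
    image: nginx                            # illustrative image
    lifecycle:
      postStart:
        httpGet:
          host: 10.32.0.10                  # placeholder: IP of the hook-handler pod
          path: /echo?msg=poststart
          port: 8080
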
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:06:30.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 12 12:06:30.736: INFO: Waiting up to 5m0s for pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-mm97z" to be "success or failure"
Feb 12 12:06:30.761: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.935514ms
Feb 12 12:06:32.782: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045082741s
Feb 12 12:06:34.821: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084430061s
Feb 12 12:06:37.173: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436186821s
Feb 12 12:06:39.190: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.452903971s
Feb 12 12:06:41.208: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.471141627s
STEP: Saw pod success
Feb 12 12:06:41.208: INFO: Pod "pod-1aab40b9-4d90-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:06:41.217: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1aab40b9-4d90-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:06:41.475: INFO: Waiting for pod pod-1aab40b9-4d90-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:06:41.480: INFO: Pod pod-1aab40b9-4d90-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:06:41.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mm97z" for this suite.
Feb 12 12:06:47.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:06:47.826: INFO: namespace: e2e-tests-emptydir-mm97z, resource: bindings, ignored listing per whitelist
Feb 12 12:06:47.979: INFO: namespace e2e-tests-emptydir-mm97z deletion completed in 6.480008192s

• [SLOW TEST:17.644 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
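"emptydir 0777 on node default medium" boils down to a pod with an emptyDir volume backed by the node's default storage. A minimal illustrative spec, with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                            # default medium; medium: Memory would use tmpfs instead
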
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:06:47.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-4hscj/secret-test-2525a860-4d90-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:06:48.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-4hscj" to be "success or failure"
Feb 12 12:06:48.264: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.648288ms
Feb 12 12:06:50.675: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439753548s
Feb 12 12:06:52.690: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455123599s
Feb 12 12:06:54.775: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.539366566s
Feb 12 12:06:56.791: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555392026s
Feb 12 12:06:58.798: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.563057748s
STEP: Saw pod success
Feb 12 12:06:58.798: INFO: Pod "pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:06:58.802: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005 container env-test: 
STEP: delete the pod
Feb 12 12:06:59.888: INFO: Waiting for pod pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:07:00.015: INFO: Pod pod-configmaps-2526d35a-4d90-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:07:00.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4hscj" for this suite.
Feb 12 12:07:06.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:07:06.357: INFO: namespace: e2e-tests-secrets-4hscj, resource: bindings, ignored listing per whitelist
Feb 12 12:07:06.413: INFO: namespace e2e-tests-secrets-4hscj deletion completed in 6.360153743s

• [SLOW TEST:18.434 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
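Consuming a Secret "via the environment" means an env var sourced with secretKeyRef, roughly like the assumed-name sketch below:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo                    # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-demo
          key: data-1
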
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:07:06.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 12 12:07:06.698: INFO: Waiting up to 5m0s for pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-r8hwd" to be "success or failure"
Feb 12 12:07:06.714: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.988185ms
Feb 12 12:07:08.728: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029808182s
Feb 12 12:07:10.763: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065156492s
Feb 12 12:07:12.798: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099854155s
Feb 12 12:07:14.908: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21042123s
Feb 12 12:07:16.937: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.238860992s
STEP: Saw pod success
Feb 12 12:07:16.937: INFO: Pod "pod-3029024d-4d90-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:07:17.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3029024d-4d90-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:07:17.125: INFO: Waiting for pod pod-3029024d-4d90-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:07:17.279: INFO: Pod pod-3029024d-4d90-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:07:17.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r8hwd" for this suite.
Feb 12 12:07:23.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:07:23.544: INFO: namespace: e2e-tests-emptydir-r8hwd, resource: bindings, ignored listing per whitelist
Feb 12 12:07:23.556: INFO: namespace e2e-tests-emptydir-r8hwd deletion completed in 6.259427125s

• [SLOW TEST:17.143 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:07:23.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-3a653e01-4d90-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:07:23.983: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-b4hj9" to be "success or failure"
Feb 12 12:07:24.037: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 53.935561ms
Feb 12 12:07:26.051: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068242474s
Feb 12 12:07:28.069: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085832775s
Feb 12 12:07:30.254: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271101955s
Feb 12 12:07:32.268: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285013362s
Feb 12 12:07:34.295: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311505841s
Feb 12 12:07:36.583: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.599488702s
STEP: Saw pod success
Feb 12 12:07:36.583: INFO: Pod "pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:07:36.601: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 12:07:37.718: INFO: Waiting for pod pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:07:37.953: INFO: Pod pod-projected-configmaps-3a6e71fc-4d90-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:07:37.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b4hj9" for this suite.
Feb 12 12:07:46.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:07:46.259: INFO: namespace: e2e-tests-projected-b4hj9, resource: bindings, ignored listing per whitelist
Feb 12 12:07:46.299: INFO: namespace e2e-tests-projected-b4hj9 deletion completed in 8.333045205s

• [SLOW TEST:22.742 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:07:46.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0212 12:07:49.124891       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 12:07:49.125: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:07:49.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mkhqh" for this suite.
Feb 12 12:07:55.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:07:56.008: INFO: namespace: e2e-tests-gc-mkhqh, resource: bindings, ignored listing per whitelist
Feb 12 12:07:56.095: INFO: namespace e2e-tests-gc-mkhqh deletion completed in 6.96455231s

• [SLOW TEST:9.796 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
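"Not orphaning" means the Deployment is deleted with cascading (background) deletion, so the garbage collector also removes the ReplicaSet and Pods it owns. An illustrative Deployment with assumed names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-demo-deployment                  # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
# kubectl delete deployment gc-demo-deployment   (cascading delete is the default;
# --cascade=false in kubectl of this era would orphan the ReplicaSet instead)
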
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:07:56.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb 12 12:07:56.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j589c'
Feb 12 12:07:59.213: INFO: stderr: ""
Feb 12 12:07:59.213: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb 12 12:08:00.227: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:00.227: INFO: Found 0 / 1
Feb 12 12:08:01.626: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:01.626: INFO: Found 0 / 1
Feb 12 12:08:02.244: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:02.244: INFO: Found 0 / 1
Feb 12 12:08:03.227: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:03.227: INFO: Found 0 / 1
Feb 12 12:08:04.570: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:04.570: INFO: Found 0 / 1
Feb 12 12:08:05.284: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:05.284: INFO: Found 0 / 1
Feb 12 12:08:06.351: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:06.351: INFO: Found 0 / 1
Feb 12 12:08:07.229: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:07.229: INFO: Found 0 / 1
Feb 12 12:08:08.235: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:08.235: INFO: Found 0 / 1
Feb 12 12:08:09.228: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:09.229: INFO: Found 1 / 1
Feb 12 12:08:09.229: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 12 12:08:09.240: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:08:09.240: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Feb 12 12:08:09.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hm6x6 redis-master --namespace=e2e-tests-kubectl-j589c'
Feb 12 12:08:09.447: INFO: stderr: ""
Feb 12 12:08:09.447: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 12 Feb 12:08:06.993 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 12:08:06.993 # Server started, Redis version 3.2.12\n1:M 12 Feb 12:08:06.993 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 12:08:06.994 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 12 12:08:09.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hm6x6 redis-master --namespace=e2e-tests-kubectl-j589c --tail=1'
Feb 12 12:08:09.627: INFO: stderr: ""
Feb 12 12:08:09.627: INFO: stdout: "1:M 12 Feb 12:08:06.994 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 12 12:08:09.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hm6x6 redis-master --namespace=e2e-tests-kubectl-j589c --limit-bytes=1'
Feb 12 12:08:09.832: INFO: stderr: ""
Feb 12 12:08:09.832: INFO: stdout: " "
STEP: exposing timestamps
Feb 12 12:08:09.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hm6x6 redis-master --namespace=e2e-tests-kubectl-j589c --tail=1 --timestamps'
Feb 12 12:08:10.002: INFO: stderr: ""
Feb 12 12:08:10.002: INFO: stdout: "2020-02-12T12:08:06.994903752Z 1:M 12 Feb 12:08:06.994 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 12 12:08:12.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hm6x6 redis-master --namespace=e2e-tests-kubectl-j589c --since=1s'
Feb 12 12:08:12.774: INFO: stderr: ""
Feb 12 12:08:12.774: INFO: stdout: ""
Feb 12 12:08:12.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-hm6x6 redis-master --namespace=e2e-tests-kubectl-j589c --since=24h'
Feb 12 12:08:12.978: INFO: stderr: ""
Feb 12 12:08:12.978: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 12 Feb 12:08:06.993 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 12:08:06.993 # Server started, Redis version 3.2.12\n1:M 12 Feb 12:08:06.993 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 12:08:06.994 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb 12 12:08:12.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-j589c'
Feb 12 12:08:13.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:08:13.147: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 12 12:08:13.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-j589c'
Feb 12 12:08:13.403: INFO: stderr: "No resources found.\n"
Feb 12 12:08:13.403: INFO: stdout: ""
Feb 12 12:08:13.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-j589c -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 12:08:13.530: INFO: stderr: ""
Feb 12 12:08:13.530: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:08:13.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j589c" for this suite.
Feb 12 12:08:36.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:08:36.697: INFO: namespace: e2e-tests-kubectl-j589c, resource: bindings, ignored listing per whitelist
Feb 12 12:08:36.742: INFO: namespace e2e-tests-kubectl-j589c deletion completed in 23.202781567s

• [SLOW TEST:40.646 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
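The rc created above is a single-replica redis-master ReplicationController; a rough equivalent with assumed image and labels is sketched here, and the filtering flags demonstrated above (--tail, --limit-bytes, --timestamps, --since) are then run against its pod.

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2                    # assumed tag; the e2e fixture pins its own image
        ports:
        - containerPort: 6379
# e.g. kubectl logs redis-master-<hash> redis-master --tail=1 --timestamps --since=24h
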
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:08:36.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 12 12:08:36.870: INFO: Waiting up to 5m0s for pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005" in namespace "e2e-tests-containers-z29gt" to be "success or failure"
Feb 12 12:08:36.933: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 62.611904ms
Feb 12 12:08:39.007: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137137323s
Feb 12 12:08:41.324: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454175749s
Feb 12 12:08:43.335: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464928419s
Feb 12 12:08:46.046: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.176182755s
Feb 12 12:08:48.063: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.192358899s
Feb 12 12:08:50.208: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.337770593s
STEP: Saw pod success
Feb 12 12:08:50.208: INFO: Pod "client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:08:50.296: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:08:50.537: INFO: Waiting for pod client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:08:50.559: INFO: Pod client-containers-65e71cb6-4d90-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:08:50.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-z29gt" for this suite.
Feb 12 12:08:56.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:08:56.822: INFO: namespace: e2e-tests-containers-z29gt, resource: bindings, ignored listing per whitelist
Feb 12 12:08:56.843: INFO: namespace e2e-tests-containers-z29gt deletion completed in 6.27495438s

• [SLOW TEST:20.101 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
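Overriding an image's default arguments is done with args in the container spec. A hypothetical example:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # args replaces the image's default CMD; command would replace its ENTRYPOINT
    args: ["echo", "overridden", "arguments"]
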
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:08:56.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 12 12:09:07.105: INFO: Pod pod-hostip-71f2cc58-4d90-11ea-b4b9-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:09:07.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gsc7w" for this suite.
Feb 12 12:09:31.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:09:31.263: INFO: namespace: e2e-tests-pods-gsc7w, resource: bindings, ignored listing per whitelist
Feb 12 12:09:31.335: INFO: namespace e2e-tests-pods-gsc7w deletion completed in 24.223530107s

• [SLOW TEST:34.491 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
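The assertion is simply that pod.status.hostIP is populated once the pod is scheduled. As an illustrative (not framework) way to observe the same field from inside the pod, the downward API can expose it as an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo                     # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo running on $HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
# or read it directly: kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'
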
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:09:31.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 12 12:09:49.729: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:09:49.749: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:09:51.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:09:51.764: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:09:53.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:09:53.779: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:09:55.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:09:55.767: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:09:57.749: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:09:57.768: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:09:59.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:09:59.768: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:10:01.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:10:01.770: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 12:10:03.750: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 12:10:03.769: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:10:03.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gztf6" for this suite.
Feb 12 12:10:29.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:10:29.973: INFO: namespace: e2e-tests-container-lifecycle-hook-gztf6, resource: bindings, ignored listing per whitelist
Feb 12 12:10:30.080: INFO: namespace e2e-tests-container-lifecycle-hook-gztf6 deletion completed in 26.234413946s

• [SLOW TEST:58.744 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:10:30.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-a98893d2-4d90-11ea-b4b9-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-a9889466-4d90-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a98893d2-4d90-11ea-b4b9-0242ac110005
STEP: Updating configmap cm-test-opt-upd-a9889466-4d90-11ea-b4b9-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-a98894af-4d90-11ea-b4b9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:11:52.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4xn5w" for this suite.
Feb 12 12:12:17.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:12:17.247: INFO: namespace: e2e-tests-projected-4xn5w, resource: bindings, ignored listing per whitelist
Feb 12 12:12:17.313: INFO: namespace e2e-tests-projected-4xn5w deletion completed in 24.498991656s

• [SLOW TEST:107.233 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
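"Optional" here refers to projected ConfigMap sources marked optional: true, so the pod starts even when a referenced ConfigMap is missing and the mounted files track later create, update, and delete events. A sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-optional-demo         # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create          # may not exist yet; keys appear once it is created
          optional: true
      - configMap:
          name: cm-test-opt-upd             # later updates are reflected in the mounted files
          optional: true
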
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:12:17.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:12:17.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-hcb68" to be "success or failure"
Feb 12 12:12:17.667: INFO: Pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.791003ms
Feb 12 12:12:19.684: INFO: Pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037894457s
Feb 12 12:12:21.705: INFO: Pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059231555s
Feb 12 12:12:23.727: INFO: Pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080655892s
Feb 12 12:12:25.802: INFO: Pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156232705s
STEP: Saw pod success
Feb 12 12:12:25.803: INFO: Pod "downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:12:25.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 12:12:25.980: INFO: Waiting for pod downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:12:26.018: INFO: Pod downwardapi-volume-e96c679d-4d90-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:12:26.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hcb68" for this suite.
Feb 12 12:12:32.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:12:32.228: INFO: namespace: e2e-tests-projected-hcb68, resource: bindings, ignored listing per whitelist
Feb 12 12:12:32.237: INFO: namespace e2e-tests-projected-hcb68 deletion completed in 6.206302545s

• [SLOW TEST:14.922 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:12:32.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-f256311c-4d90-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:12:42.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h9xqt" for this suite.
Feb 12 12:13:06.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:13:06.784: INFO: namespace: e2e-tests-configmap-h9xqt, resource: bindings, ignored listing per whitelist
Feb 12 12:13:06.924: INFO: namespace e2e-tests-configmap-h9xqt deletion completed in 24.236050186s

• [SLOW TEST:34.686 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
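Binary content lives in the ConfigMap's binaryData field (base64-encoded) alongside plain data keys; mounting the ConfigMap as a volume surfaces both as files. A minimal assumed-name example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo               # hypothetical name
data:
  text-data: "hello"
binaryData:
  binary-file: aGVsbG8gd29ybGQ=             # base64 for "hello world"
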
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:13:06.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0212 12:13:17.227270       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 12:13:17.227: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:13:17.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ncsxp" for this suite.
Feb 12 12:13:25.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:13:25.724: INFO: namespace: e2e-tests-gc-ncsxp, resource: bindings, ignored listing per whitelist
Feb 12 12:13:25.724: INFO: namespace e2e-tests-gc-ncsxp deletion completed in 8.491112156s

• [SLOW TEST:18.799 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:13:25.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vh89n
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-vh89n
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-vh89n
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-vh89n
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-vh89n
Feb 12 12:13:38.540: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vh89n, name: ss-0, uid: 17442169-4d91-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 12 12:13:42.518: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vh89n, name: ss-0, uid: 17442169-4d91-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 12 12:13:42.683: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vh89n, name: ss-0, uid: 17442169-4d91-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 12 12:13:42.712: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-vh89n
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-vh89n
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-vh89n and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 12 12:13:53.148: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vh89n
Feb 12 12:13:53.156: INFO: Scaling statefulset ss to 0
Feb 12 12:14:03.191: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 12:14:03.203: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:14:03.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vh89n" for this suite.
Feb 12 12:14:11.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:14:11.461: INFO: namespace: e2e-tests-statefulset-vh89n, resource: bindings, ignored listing per whitelist
Feb 12 12:14:11.687: INFO: namespace e2e-tests-statefulset-vh89n deletion completed in 8.394432935s

• [SLOW TEST:45.963 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
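The run above relies on a host-port conflict: a plain pod is pinned to a node and binds a hostPort, then a StatefulSet pod requesting the same hostPort on the same node fails (Pending, then Failed) and is repeatedly deleted and recreated by the StatefulSet controller until the conflicting pod is removed. A minimal sketch of the two objects, assuming illustrative names, image, and port number rather than the suite's own fixtures:

// Sketch (not the e2e suite's code) of the "recreate evicted statefulset"
// scenario: a plain pod grabs a hostPort, and a StatefulSet pod requesting
// the same hostPort on the same node keeps failing until the conflicting
// pod is removed. Names and the port number are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "ss"}
	replicas := int32(1)

	// Pod that already binds the host port on the target node.
	conflict := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			NodeName: "hunter-server-hu5at5svl7ps",
			Containers: []corev1.Container{{
				Name:  "nginx",
				Image: "docker.io/library/nginx:1.14-alpine",
				Ports: []corev1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}},
			}},
		},
	}

	// StatefulSet whose single pod requests the same hostPort; ss-0 cannot
	// start until the conflicting pod above is deleted.
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName: "hunter-server-hu5at5svl7ps",
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}},
					}},
				},
			},
		},
	}

	for _, obj := range []interface{}{conflict, ss} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}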
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:14:11.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:14:20.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-d4cqh" for this suite.
Feb 12 12:15:04.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:15:04.202: INFO: namespace: e2e-tests-kubelet-test-d4cqh, resource: bindings, ignored listing per whitelist
Feb 12 12:15:04.444: INFO: namespace e2e-tests-kubelet-test-d4cqh deletion completed in 44.31600834s

• [SLOW TEST:52.756 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
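The check above boils down to running a one-shot busybox command in a pod and asserting on the container log. A minimal sketch, with an assumed pod name and message rather than the suite's fixtures:

// Sketch of the kind of pod the "should print the output to logs" check
// exercises: a busybox container runs a shell command and the test then
// reads the container log. Pod name and message are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo 'Hello from busybox'"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// Once the pod has run, reading its container log (kubectl logs
	// busybox-scheduling) should show the echoed line, which is what the
	// conformance check asserts on.
}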
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:15:04.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-4d375af5-4d91-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:15:04.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-5z28t" to be "success or failure"
Feb 12 12:15:04.966: INFO: Pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.786716ms
Feb 12 12:15:07.310: INFO: Pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354639726s
Feb 12 12:15:09.328: INFO: Pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372213729s
Feb 12 12:15:11.352: INFO: Pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396621395s
Feb 12 12:15:13.376: INFO: Pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.420262039s
STEP: Saw pod success
Feb 12 12:15:13.376: INFO: Pod "pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:15:13.381: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 12:15:13.471: INFO: Waiting for pod pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:15:13.521: INFO: Pod pod-projected-configmaps-4d397b3d-4d91-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:15:13.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5z28t" for this suite.
Feb 12 12:15:19.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:15:19.599: INFO: namespace: e2e-tests-projected-5z28t, resource: bindings, ignored listing per whitelist
Feb 12 12:15:19.746: INFO: namespace e2e-tests-projected-5z28t deletion completed in 6.217074335s

• [SLOW TEST:15.302 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
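The projected-configMap case above maps a configMap key to a custom path inside a projected volume and sets an explicit per-item file mode, then reads the file back from a consuming container. A sketch of such a pod, with assumed configMap name, key, path, and mode:

// Sketch of a projected configMap volume with a key-to-path mapping and an
// explicit item mode. ConfigMap name, key, path and mode are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // item mode applied to the projected file
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "path/to/data-2",
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}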
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:15:19.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 12 12:15:28.073: INFO: 10 pods remaining
Feb 12 12:15:28.073: INFO: 10 pods have nil DeletionTimestamp
Feb 12 12:15:28.073: INFO: 
Feb 12 12:15:29.743: INFO: 9 pods remaining
Feb 12 12:15:29.744: INFO: 6 pods have nil DeletionTimestamp
Feb 12 12:15:29.744: INFO: 
Feb 12 12:15:31.091: INFO: 6 pods remaining
Feb 12 12:15:31.091: INFO: 0 pods have nil DeletionTimestamp
Feb 12 12:15:31.091: INFO: 
Feb 12 12:15:32.263: INFO: 0 pods remaining
Feb 12 12:15:32.263: INFO: 0 pods have nil DeletionTimestamp
Feb 12 12:15:32.263: INFO: 
Feb 12 12:15:32.556: INFO: 0 pods remaining
Feb 12 12:15:32.556: INFO: 0 pods have nil DeletionTimestamp
Feb 12 12:15:32.556: INFO: 
STEP: Gathering metrics
W0212 12:15:33.484990       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 12:15:33.485: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:15:33.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8n8dp" for this suite.
Feb 12 12:15:47.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:15:48.189: INFO: namespace: e2e-tests-gc-8n8dp, resource: bindings, ignored listing per whitelist
Feb 12 12:15:48.217: INFO: namespace e2e-tests-gc-8n8dp deletion completed in 14.721489491s

• [SLOW TEST:28.470 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
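The deleteOptions behaviour exercised above corresponds to foreground deletion: the replication controller is deleted with PropagationPolicy=Foreground, so the API server keeps the RC (held by a foregroundDeletion finalizer) until the garbage collector has deleted all of its pods, which is what the shrinking "pods remaining" counts show. A hedged sketch using v1.13-era client-go signatures (matching the suite version in this log); the RC name is an assumption:

// Hedged sketch of foreground deletion of a replication controller. The
// v1.13-era typed-client signature Delete(name, *metav1.DeleteOptions) is
// assumed from the suite version in this log; newer client-go releases also
// take a context and a value DeleteOptions.
package main

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("e2e-tests-gc-8n8dp").
		Delete("simpletest.rc", &metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		log.Fatal(err)
	}
	// While the GC drains the pods (the "N pods remaining" lines above), the
	// RC object stays visible with a foregroundDeletion finalizer set.
}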
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:15:48.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:15:48.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-4ftp9" to be "success or failure"
Feb 12 12:15:48.411: INFO: Pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.037817ms
Feb 12 12:15:50.425: INFO: Pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043653916s
Feb 12 12:15:52.444: INFO: Pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062366244s
Feb 12 12:15:54.477: INFO: Pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095250919s
Feb 12 12:15:56.520: INFO: Pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.138704093s
STEP: Saw pod success
Feb 12 12:15:56.521: INFO: Pod "downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:15:56.535: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 12:15:57.321: INFO: Waiting for pod downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:15:57.344: INFO: Pod downwardapi-volume-6717145a-4d91-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:15:57.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4ftp9" for this suite.
Feb 12 12:16:03.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:16:03.761: INFO: namespace: e2e-tests-projected-4ftp9, resource: bindings, ignored listing per whitelist
Feb 12 12:16:03.899: INFO: namespace e2e-tests-projected-4ftp9 deletion completed in 6.545998755s

• [SLOW TEST:15.681 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
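Both this test and the next one in the log use a projected downward API volume with a resourceFieldRef: here the container's limits.memory is written to a file, and in the following case the cpu limit is omitted so the reported value falls back to the node's allocatable CPU. A sketch of the memory-limit variant, with assumed names and limit value:

// Sketch of a projected downward API volume exposing the container's
// limits.memory as a file. Names and the limit value are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-memory-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}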
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:16:03.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:16:04.229: INFO: Waiting up to 5m0s for pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-99jk8" to be "success or failure"
Feb 12 12:16:04.287: INFO: Pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 57.132974ms
Feb 12 12:16:06.308: INFO: Pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078543771s
Feb 12 12:16:08.323: INFO: Pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093792525s
Feb 12 12:16:10.349: INFO: Pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11906617s
Feb 12 12:16:12.363: INFO: Pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.132940447s
STEP: Saw pod success
Feb 12 12:16:12.363: INFO: Pod "downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:16:12.368: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 12:16:12.450: INFO: Waiting for pod downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:16:12.485: INFO: Pod downwardapi-volume-708c2aa3-4d91-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:16:12.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-99jk8" for this suite.
Feb 12 12:16:18.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:16:18.874: INFO: namespace: e2e-tests-projected-99jk8, resource: bindings, ignored listing per whitelist
Feb 12 12:16:18.935: INFO: namespace e2e-tests-projected-99jk8 deletion completed in 6.428567389s

• [SLOW TEST:15.035 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:16:18.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0212 12:17:00.590293       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 12:17:00.590: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:17:00.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tfc9h" for this suite.
Feb 12 12:17:20.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:17:21.054: INFO: namespace: e2e-tests-gc-tfc9h, resource: bindings, ignored listing per whitelist
Feb 12 12:17:21.065: INFO: namespace e2e-tests-gc-tfc9h deletion completed in 20.461847209s

• [SLOW TEST:62.130 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
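Here the delete options request orphaning instead: with PropagationPolicy=Orphan the replication controller is removed but its pods are left alone, and the suite waits 30 seconds to confirm the garbage collector does not delete them. A small sketch of the delete-options body such a request carries:

// Sketch of the delete options used to orphan an RC's pods. Only the options
// object is built here; the RC itself and the API call are omitted.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	out, _ := json.Marshal(opts)
	fmt.Println(string(out)) // {"propagationPolicy":"Orphan"}
}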
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:17:21.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 12 12:17:23.270: INFO: Waiting up to 5m0s for pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005" in namespace "e2e-tests-var-expansion-zsfqw" to be "success or failure"
Feb 12 12:17:23.287: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.838181ms
Feb 12 12:17:25.405: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135641932s
Feb 12 12:17:27.425: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155772863s
Feb 12 12:17:29.452: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182862069s
Feb 12 12:17:31.567: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.297267967s
Feb 12 12:17:33.587: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.317037807s
Feb 12 12:17:35.775: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.505773256s
STEP: Saw pod success
Feb 12 12:17:35.776: INFO: Pod "var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:17:35.825: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb 12 12:17:35.964: INFO: Waiting for pod var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:17:35.985: INFO: Pod var-expansion-9f641782-4d91-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:17:35.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zsfqw" for this suite.
Feb 12 12:17:42.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:17:42.302: INFO: namespace: e2e-tests-var-expansion-zsfqw, resource: bindings, ignored listing per whitelist
Feb 12 12:17:42.314: INFO: namespace e2e-tests-var-expansion-zsfqw deletion completed in 6.317537649s

• [SLOW TEST:21.247 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
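The substitution above is performed by the kubelet: $(VAR) references in a container's command and args are expanded from the container's declared environment variables before the process starts. A sketch with an assumed variable name and value:

// Sketch of command substitution: MESSAGE is declared as an env var and
// referenced as $(MESSAGE) in the command, which the kubelet expands before
// the container runs. Names and values are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
				Command: []string{"/bin/sh", "-c", "echo $(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// The container log should contain "test-value", confirming substitution.
}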
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:17:42.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:18:42.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-szljp" for this suite.
Feb 12 12:19:07.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:19:08.303: INFO: namespace: e2e-tests-container-probe-szljp, resource: bindings, ignored listing per whitelist
Feb 12 12:19:08.324: INFO: namespace e2e-tests-container-probe-szljp deletion completed in 25.496348719s

• [SLOW TEST:86.010 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
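The probe test above runs a pod whose readiness probe always fails, so the pod stays Running, never reports Ready, and is never restarted (restarts are driven by liveness, not readiness, probes). A sketch of such a pod, using the v1.13-era core/v1 API from this log where Probe embeds Handler (later renamed ProbeHandler); the command and probe timings are assumptions:

// Sketch of a pod whose readiness probe always fails: it keeps Running but
// never becomes Ready, and the container is never restarted.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test-webserver",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// v1.13-era API: Probe embeds corev1.Handler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}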
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:19:08.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vsxpf
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 12 12:19:08.901: INFO: Found 0 stateful pods, waiting for 3
Feb 12 12:19:18.979: INFO: Found 1 stateful pods, waiting for 3
Feb 12 12:19:29.112: INFO: Found 2 stateful pods, waiting for 3
Feb 12 12:19:38.919: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 12:19:38.919: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 12:19:38.919: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 12:19:48.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 12:19:48.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 12:19:48.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 12 12:19:48.951: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 12 12:19:59.021: INFO: Updating stateful set ss2
Feb 12 12:19:59.055: INFO: Waiting for Pod e2e-tests-statefulset-vsxpf/ss2-2 to have revision ss2-6c5cd755cd, update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 12 12:20:10.404: INFO: Found 2 stateful pods, waiting for 3
Feb 12 12:20:20.426: INFO: Found 2 stateful pods, waiting for 3
Feb 12 12:20:30.422: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 12:20:30.422: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 12:20:30.422: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 12 12:20:30.482: INFO: Updating stateful set ss2
Feb 12 12:20:30.522: INFO: Waiting for Pod e2e-tests-statefulset-vsxpf/ss2-1 to have revision ss2-6c5cd755cd, update revision ss2-7c9b54fd4c
Feb 12 12:20:40.607: INFO: Waiting for Pod e2e-tests-statefulset-vsxpf/ss2-1 to have revision ss2-6c5cd755cd, update revision ss2-7c9b54fd4c
Feb 12 12:20:50.615: INFO: Updating stateful set ss2
Feb 12 12:20:50.692: INFO: Waiting for StatefulSet e2e-tests-statefulset-vsxpf/ss2 to complete update
Feb 12 12:20:50.692: INFO: Waiting for Pod e2e-tests-statefulset-vsxpf/ss2-0 to have revision ss2-6c5cd755cd, update revision ss2-7c9b54fd4c
Feb 12 12:21:00.733: INFO: Waiting for StatefulSet e2e-tests-statefulset-vsxpf/ss2 to complete update
Feb 12 12:21:00.733: INFO: Waiting for Pod e2e-tests-statefulset-vsxpf/ss2-0 to have revision ss2-6c5cd755cd, update revision ss2-7c9b54fd4c
Feb 12 12:21:10.997: INFO: Waiting for StatefulSet e2e-tests-statefulset-vsxpf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 12 12:21:20.718: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vsxpf
Feb 12 12:21:20.725: INFO: Scaling statefulset ss2 to 0
Feb 12 12:21:50.766: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 12:21:50.781: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:21:50.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vsxpf" for this suite.
Feb 12 12:21:58.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:21:59.031: INFO: namespace: e2e-tests-statefulset-vsxpf, resource: bindings, ignored listing per whitelist
Feb 12 12:21:59.117: INFO: namespace e2e-tests-statefulset-vsxpf deletion completed in 8.27984923s

• [SLOW TEST:170.793 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
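The canary and phased updates above are driven by the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition are replaced with the new revision, so a partition above the replica count updates nothing, partition=2 on a three-replica set canaries ss2-2, and lowering the partition afterwards phases the rollout across the remaining pods. A sketch of such a StatefulSet, with the nginx images from the log and otherwise assumed values:

// Sketch of a partitioned RollingUpdate StatefulSet. With partition=2, only
// the pod with ordinal 2 is moved to the new template (the canary).
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "ss2"}
	replicas := int32(3)
	partition := int32(2) // canary: only ordinal 2 gets the new template

	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine", // updated revision
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}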
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:21:59.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 12 12:25:02.209: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:02.312: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:04.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:04.331: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:06.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:06.385: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:08.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:08.332: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:10.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:10.328: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:12.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:12.328: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:14.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:14.323: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:16.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:16.335: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:18.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:18.330: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:20.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:20.332: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:22.312: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:22.336: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:24.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:24.329: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:26.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:26.387: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:28.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:28.365: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:30.318: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:30.328: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:32.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:32.330: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:34.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:34.348: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:36.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:36.332: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 12:25:38.313: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 12:25:38.337: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:25:38.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mmpbk" for this suite.
Feb 12 12:26:06.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:26:06.539: INFO: namespace: e2e-tests-container-lifecycle-hook-mmpbk, resource: bindings, ignored listing per whitelist
Feb 12 12:26:06.634: INFO: namespace e2e-tests-container-lifecycle-hook-mmpbk deletion completed in 28.28535577s

• [SLOW TEST:247.516 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
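The poststart check above starts a helper pod that handles hook requests, then creates a pod whose container has an exec poststart hook; the hook must complete before the container is treated as started, and the test verifies it fired before tearing both pods down. A sketch of the hooked pod, using the v1.13-era core/v1 API where lifecycle hooks use corev1.Handler (later renamed LifecycleHandler); the hook command is an assumption standing in for the suite's callback to the helper pod:

// Sketch of a pod with an exec poststart lifecycle hook.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// v1.13-era API: hooks use corev1.Handler.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// The e2e hook notifies the handler pod; here we
							// just record locally that the hook ran.
							Command: []string{"/bin/sh", "-c", "echo poststart > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}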
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:26:06.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 12 12:26:17.182: INFO: error from creating an uninitialized pod in the namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:26:59.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-hqv7m" for this suite.
Feb 12 12:27:05.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:27:05.655: INFO: namespace: e2e-tests-namespaces-hqv7m, resource: bindings, ignored listing per whitelist
Feb 12 12:27:05.656: INFO: namespace e2e-tests-namespaces-hqv7m deletion completed in 6.26575482s
STEP: Destroying namespace "e2e-tests-nsdeletetest-xkpzl" for this suite.
Feb 12 12:27:05.661: INFO: Namespace e2e-tests-nsdeletetest-xkpzl was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-jq472" for this suite.
Feb 12 12:27:11.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:27:11.861: INFO: namespace: e2e-tests-nsdeletetest-jq472, resource: bindings, ignored listing per whitelist
Feb 12 12:27:11.896: INFO: namespace e2e-tests-nsdeletetest-jq472 deletion completed in 6.234554326s

• [SLOW TEST:65.262 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
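The namespace test above verifies cascading cleanup: once a namespace is deleted, every pod that lived in it is removed as well, and a recreated namespace with the same name starts out empty. A hedged sketch of that flow using v1.13-era client-go signatures (no context arguments), matching the suite version in this log; names are illustrative:

// Hedged sketch: create a namespace, run a pod in it, delete the namespace,
// then confirm no pods survive once the namespace is gone.
package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	nsName := "nsdeletetest"
	if _, err := client.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: nsName},
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := client.CoreV1().Pods(nsName).Create(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
		},
	}); err != nil {
		log.Fatal(err)
	}

	if err := client.CoreV1().Namespaces().Delete(nsName, nil); err != nil {
		log.Fatal(err)
	}
	// Poll until the namespace is fully gone, then confirm its pods went with it.
	for {
		_, err := client.CoreV1().Namespaces().Get(nsName, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
	pods, err := client.CoreV1().Pods(nsName).List(metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pods remaining in %s: %d", nsName, len(pods.Items))
}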
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:27:11.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-feb927b7-4d92-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:27:12.329: INFO: Waiting up to 5m0s for pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-kpgcl" to be "success or failure"
Feb 12 12:27:12.341: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.701567ms
Feb 12 12:27:14.445: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115639879s
Feb 12 12:27:16.460: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130705594s
Feb 12 12:27:18.489: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159219589s
Feb 12 12:27:20.515: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185347671s
Feb 12 12:27:22.542: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.212505956s
STEP: Saw pod success
Feb 12 12:27:22.542: INFO: Pod "pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:27:22.562: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 12 12:27:22.626: INFO: Waiting for pod pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:27:22.631: INFO: Pod pod-secrets-febee8b3-4d92-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:27:22.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kpgcl" for this suite.
Feb 12 12:27:28.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:27:28.940: INFO: namespace: e2e-tests-secrets-kpgcl, resource: bindings, ignored listing per whitelist
Feb 12 12:27:28.945: INFO: namespace e2e-tests-secrets-kpgcl deletion completed in 6.307410989s

• [SLOW TEST:17.049 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:27:28.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-08cc8db8-4d93-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:27:29.164: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-8h9qt" to be "success or failure"
Feb 12 12:27:29.208: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.030305ms
Feb 12 12:27:31.265: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101717734s
Feb 12 12:27:33.283: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119553426s
Feb 12 12:27:35.419: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255509651s
Feb 12 12:27:37.432: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26843981s
Feb 12 12:27:39.450: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.286391855s
STEP: Saw pod success
Feb 12 12:27:39.450: INFO: Pod "pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:27:39.456: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 12:27:39.802: INFO: Waiting for pod pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:27:40.030: INFO: Pod pod-projected-configmaps-08cd8771-4d93-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:27:40.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8h9qt" for this suite.
Feb 12 12:27:46.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:27:46.285: INFO: namespace: e2e-tests-projected-8h9qt, resource: bindings, ignored listing per whitelist
Feb 12 12:27:46.332: INFO: namespace e2e-tests-projected-8h9qt deletion completed in 6.289534395s

• [SLOW TEST:17.387 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:27:46.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 12 12:27:46.577: INFO: Waiting up to 5m0s for pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-nvnwf" to be "success or failure"
Feb 12 12:27:46.608: INFO: Pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.081695ms
Feb 12 12:27:48.671: INFO: Pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093298997s
Feb 12 12:27:50.690: INFO: Pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112673163s
Feb 12 12:27:52.872: INFO: Pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294687397s
Feb 12 12:27:54.916: INFO: Pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.338184478s
STEP: Saw pod success
Feb 12 12:27:54.916: INFO: Pod "pod-132d79a3-4d93-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:27:54.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-132d79a3-4d93-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:27:55.210: INFO: Waiting for pod pod-132d79a3-4d93-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:27:55.219: INFO: Pod pod-132d79a3-4d93-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:27:55.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nvnwf" for this suite.
Feb 12 12:28:01.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:28:01.684: INFO: namespace: e2e-tests-emptydir-nvnwf, resource: bindings, ignored listing per whitelist
Feb 12 12:28:01.701: INFO: namespace e2e-tests-emptydir-nvnwf deletion completed in 6.469894095s

• [SLOW TEST:15.369 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
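The emptydir case above mounts a tmpfs-backed (medium "Memory") emptyDir into a container running as a non-root user and checks that a file created at mode 0644 has the expected ownership and permissions. A sketch of such a pod; the user ID, paths, and shell command are assumptions in place of the suite's mounttest image:

// Sketch of a non-root pod writing a 0644 file into a tmpfs emptyDir.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &nonRootUID,
				},
				Command: []string{"/bin/sh", "-c",
					"echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}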
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:28:01.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-w2pbx
I0212 12:28:02.076634       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-w2pbx, replica count: 1
I0212 12:28:03.127682       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:04.128327       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:05.129040       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:06.129588       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:07.130133       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:08.130901       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:09.131546       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:10.132454       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 12:28:11.133054       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 12 12:28:11.286: INFO: Created: latency-svc-jtdfx
Feb 12 12:28:11.317: INFO: Got endpoints: latency-svc-jtdfx [83.356921ms]
Feb 12 12:28:11.511: INFO: Created: latency-svc-7tdn5
Feb 12 12:28:11.633: INFO: Got endpoints: latency-svc-7tdn5 [314.645794ms]
Feb 12 12:28:11.667: INFO: Created: latency-svc-vdpq5
Feb 12 12:28:11.697: INFO: Got endpoints: latency-svc-vdpq5 [375.694118ms]
Feb 12 12:28:11.876: INFO: Created: latency-svc-lgndh
Feb 12 12:28:12.069: INFO: Got endpoints: latency-svc-lgndh [749.452963ms]
Feb 12 12:28:12.099: INFO: Created: latency-svc-prj2c
Feb 12 12:28:12.108: INFO: Got endpoints: latency-svc-prj2c [790.716085ms]
Feb 12 12:28:12.278: INFO: Created: latency-svc-cld6h
Feb 12 12:28:12.286: INFO: Got endpoints: latency-svc-cld6h [967.307362ms]
Feb 12 12:28:12.343: INFO: Created: latency-svc-q5hwm
Feb 12 12:28:12.614: INFO: Got endpoints: latency-svc-q5hwm [1.296414611s]
Feb 12 12:28:12.636: INFO: Created: latency-svc-xc94t
Feb 12 12:28:12.650: INFO: Got endpoints: latency-svc-xc94t [1.331172118s]
Feb 12 12:28:12.682: INFO: Created: latency-svc-tm42f
Feb 12 12:28:12.869: INFO: Got endpoints: latency-svc-tm42f [1.550293295s]
Feb 12 12:28:12.889: INFO: Created: latency-svc-fw9wr
Feb 12 12:28:12.966: INFO: Got endpoints: latency-svc-fw9wr [1.646556591s]
Feb 12 12:28:13.187: INFO: Created: latency-svc-2q5wt
Feb 12 12:28:13.217: INFO: Got endpoints: latency-svc-2q5wt [1.89718702s]
Feb 12 12:28:13.442: INFO: Created: latency-svc-5pxjl
Feb 12 12:28:13.473: INFO: Got endpoints: latency-svc-5pxjl [2.153083728s]
Feb 12 12:28:13.534: INFO: Created: latency-svc-mwrvz
Feb 12 12:28:13.647: INFO: Got endpoints: latency-svc-mwrvz [2.327347581s]
Feb 12 12:28:13.675: INFO: Created: latency-svc-lj6xb
Feb 12 12:28:13.693: INFO: Got endpoints: latency-svc-lj6xb [2.373241565s]
Feb 12 12:28:13.734: INFO: Created: latency-svc-mbt8q
Feb 12 12:28:13.916: INFO: Got endpoints: latency-svc-mbt8q [2.597403144s]
Feb 12 12:28:13.948: INFO: Created: latency-svc-czffr
Feb 12 12:28:13.978: INFO: Got endpoints: latency-svc-czffr [2.657820492s]
Feb 12 12:28:14.004: INFO: Created: latency-svc-v68f4
Feb 12 12:28:14.197: INFO: Got endpoints: latency-svc-v68f4 [2.563915982s]
Feb 12 12:28:14.277: INFO: Created: latency-svc-npvcf
Feb 12 12:28:14.414: INFO: Got endpoints: latency-svc-npvcf [2.717459444s]
Feb 12 12:28:14.454: INFO: Created: latency-svc-2p88v
Feb 12 12:28:14.475: INFO: Got endpoints: latency-svc-2p88v [2.405785299s]
Feb 12 12:28:14.634: INFO: Created: latency-svc-4h2dp
Feb 12 12:28:14.701: INFO: Got endpoints: latency-svc-4h2dp [2.592595224s]
Feb 12 12:28:14.818: INFO: Created: latency-svc-zv44h
Feb 12 12:28:14.867: INFO: Got endpoints: latency-svc-zv44h [2.581157358s]
Feb 12 12:28:15.004: INFO: Created: latency-svc-5wvc4
Feb 12 12:28:15.026: INFO: Got endpoints: latency-svc-5wvc4 [2.411767532s]
Feb 12 12:28:15.227: INFO: Created: latency-svc-v4zjg
Feb 12 12:28:15.242: INFO: Got endpoints: latency-svc-v4zjg [2.592098049s]
Feb 12 12:28:15.292: INFO: Created: latency-svc-wmhmf
Feb 12 12:28:15.313: INFO: Got endpoints: latency-svc-wmhmf [2.443355293s]
Feb 12 12:28:15.522: INFO: Created: latency-svc-sbfs2
Feb 12 12:28:15.536: INFO: Got endpoints: latency-svc-sbfs2 [2.570006994s]
Feb 12 12:28:15.618: INFO: Created: latency-svc-hggv9
Feb 12 12:28:15.747: INFO: Got endpoints: latency-svc-hggv9 [2.530397514s]
Feb 12 12:28:15.772: INFO: Created: latency-svc-psznt
Feb 12 12:28:15.779: INFO: Got endpoints: latency-svc-psznt [2.305938114s]
Feb 12 12:28:15.841: INFO: Created: latency-svc-gstcv
Feb 12 12:28:15.957: INFO: Got endpoints: latency-svc-gstcv [2.31055727s]
Feb 12 12:28:15.982: INFO: Created: latency-svc-pbqtm
Feb 12 12:28:16.011: INFO: Got endpoints: latency-svc-pbqtm [2.317462124s]
Feb 12 12:28:16.143: INFO: Created: latency-svc-8x7m4
Feb 12 12:28:16.161: INFO: Got endpoints: latency-svc-8x7m4 [2.244790526s]
Feb 12 12:28:16.220: INFO: Created: latency-svc-t72df
Feb 12 12:28:16.286: INFO: Got endpoints: latency-svc-t72df [2.307302121s]
Feb 12 12:28:16.303: INFO: Created: latency-svc-dqc6b
Feb 12 12:28:16.321: INFO: Got endpoints: latency-svc-dqc6b [2.123604211s]
Feb 12 12:28:16.373: INFO: Created: latency-svc-vv5pd
Feb 12 12:28:16.559: INFO: Got endpoints: latency-svc-vv5pd [2.14432219s]
Feb 12 12:28:16.641: INFO: Created: latency-svc-z47vn
Feb 12 12:28:16.855: INFO: Created: latency-svc-5dgtg
Feb 12 12:28:16.855: INFO: Got endpoints: latency-svc-z47vn [2.380566604s]
Feb 12 12:28:16.901: INFO: Got endpoints: latency-svc-5dgtg [2.199999386s]
Feb 12 12:28:17.075: INFO: Created: latency-svc-2zgh5
Feb 12 12:28:17.085: INFO: Got endpoints: latency-svc-2zgh5 [2.216991596s]
Feb 12 12:28:17.267: INFO: Created: latency-svc-lp2dx
Feb 12 12:28:17.287: INFO: Got endpoints: latency-svc-lp2dx [2.260999026s]
Feb 12 12:28:17.519: INFO: Created: latency-svc-nzzc7
Feb 12 12:28:17.555: INFO: Got endpoints: latency-svc-nzzc7 [2.312828641s]
Feb 12 12:28:17.727: INFO: Created: latency-svc-q8g9v
Feb 12 12:28:17.747: INFO: Got endpoints: latency-svc-q8g9v [2.433277072s]
Feb 12 12:28:17.812: INFO: Created: latency-svc-dbtcb
Feb 12 12:28:17.812: INFO: Got endpoints: latency-svc-dbtcb [2.27540335s]
Feb 12 12:28:17.977: INFO: Created: latency-svc-64k5j
Feb 12 12:28:18.000: INFO: Got endpoints: latency-svc-64k5j [2.252568927s]
Feb 12 12:28:18.060: INFO: Created: latency-svc-5hn5g
Feb 12 12:28:18.194: INFO: Got endpoints: latency-svc-5hn5g [2.415625061s]
Feb 12 12:28:18.229: INFO: Created: latency-svc-qlxn5
Feb 12 12:28:18.244: INFO: Got endpoints: latency-svc-qlxn5 [2.285790157s]
Feb 12 12:28:18.395: INFO: Created: latency-svc-ks9nv
Feb 12 12:28:18.456: INFO: Got endpoints: latency-svc-ks9nv [2.445415547s]
Feb 12 12:28:18.665: INFO: Created: latency-svc-svqb7
Feb 12 12:28:18.841: INFO: Got endpoints: latency-svc-svqb7 [2.679554164s]
Feb 12 12:28:18.864: INFO: Created: latency-svc-hbrwd
Feb 12 12:28:18.904: INFO: Got endpoints: latency-svc-hbrwd [2.618276797s]
Feb 12 12:28:19.100: INFO: Created: latency-svc-zdwtv
Feb 12 12:28:19.280: INFO: Got endpoints: latency-svc-zdwtv [2.958428323s]
Feb 12 12:28:19.321: INFO: Created: latency-svc-ndw5w
Feb 12 12:28:19.326: INFO: Got endpoints: latency-svc-ndw5w [2.766992878s]
Feb 12 12:28:19.371: INFO: Created: latency-svc-9b4fz
Feb 12 12:28:19.525: INFO: Got endpoints: latency-svc-9b4fz [2.669920512s]
Feb 12 12:28:19.595: INFO: Created: latency-svc-8sgrp
Feb 12 12:28:19.628: INFO: Got endpoints: latency-svc-8sgrp [2.726958192s]
Feb 12 12:28:19.746: INFO: Created: latency-svc-6pbl7
Feb 12 12:28:19.768: INFO: Got endpoints: latency-svc-6pbl7 [2.682758822s]
Feb 12 12:28:19.923: INFO: Created: latency-svc-wxrcz
Feb 12 12:28:19.935: INFO: Got endpoints: latency-svc-wxrcz [2.647997754s]
Feb 12 12:28:19.967: INFO: Created: latency-svc-lxmwt
Feb 12 12:28:19.991: INFO: Got endpoints: latency-svc-lxmwt [2.435380129s]
Feb 12 12:28:20.176: INFO: Created: latency-svc-ngcpt
Feb 12 12:28:20.248: INFO: Got endpoints: latency-svc-ngcpt [2.501499371s]
Feb 12 12:28:20.516: INFO: Created: latency-svc-gwrrr
Feb 12 12:28:20.627: INFO: Got endpoints: latency-svc-gwrrr [2.815253849s]
Feb 12 12:28:20.644: INFO: Created: latency-svc-cnzm6
Feb 12 12:28:20.649: INFO: Got endpoints: latency-svc-cnzm6 [2.648705525s]
Feb 12 12:28:20.870: INFO: Created: latency-svc-kwgss
Feb 12 12:28:20.904: INFO: Got endpoints: latency-svc-kwgss [2.709459435s]
Feb 12 12:28:21.048: INFO: Created: latency-svc-gmxgx
Feb 12 12:28:21.066: INFO: Got endpoints: latency-svc-gmxgx [2.822529602s]
Feb 12 12:28:21.122: INFO: Created: latency-svc-295nk
Feb 12 12:28:21.252: INFO: Got endpoints: latency-svc-295nk [2.795385844s]
Feb 12 12:28:21.278: INFO: Created: latency-svc-xj96d
Feb 12 12:28:21.304: INFO: Got endpoints: latency-svc-xj96d [2.462634455s]
Feb 12 12:28:21.480: INFO: Created: latency-svc-sdtzb
Feb 12 12:28:21.552: INFO: Got endpoints: latency-svc-sdtzb [2.647514709s]
Feb 12 12:28:21.561: INFO: Created: latency-svc-8zmcz
Feb 12 12:28:21.718: INFO: Got endpoints: latency-svc-8zmcz [2.437783704s]
Feb 12 12:28:21.742: INFO: Created: latency-svc-s4w4p
Feb 12 12:28:21.757: INFO: Got endpoints: latency-svc-s4w4p [2.43095893s]
Feb 12 12:28:21.820: INFO: Created: latency-svc-tzgnj
Feb 12 12:28:21.973: INFO: Got endpoints: latency-svc-tzgnj [2.447413763s]
Feb 12 12:28:22.033: INFO: Created: latency-svc-mth8g
Feb 12 12:28:22.081: INFO: Got endpoints: latency-svc-mth8g [2.452181301s]
Feb 12 12:28:22.313: INFO: Created: latency-svc-8w8jw
Feb 12 12:28:22.564: INFO: Got endpoints: latency-svc-8w8jw [2.795030632s]
Feb 12 12:28:22.612: INFO: Created: latency-svc-mk9bz
Feb 12 12:28:22.817: INFO: Got endpoints: latency-svc-mk9bz [2.88110866s]
Feb 12 12:28:22.830: INFO: Created: latency-svc-56gjv
Feb 12 12:28:22.842: INFO: Got endpoints: latency-svc-56gjv [2.851189615s]
Feb 12 12:28:23.097: INFO: Created: latency-svc-x5qzp
Feb 12 12:28:23.187: INFO: Created: latency-svc-tcqnn
Feb 12 12:28:23.195: INFO: Got endpoints: latency-svc-x5qzp [2.946488394s]
Feb 12 12:28:23.345: INFO: Got endpoints: latency-svc-tcqnn [2.717501334s]
Feb 12 12:28:23.416: INFO: Created: latency-svc-n7wjq
Feb 12 12:28:23.439: INFO: Got endpoints: latency-svc-n7wjq [2.790280577s]
Feb 12 12:28:23.581: INFO: Created: latency-svc-nwldv
Feb 12 12:28:23.602: INFO: Got endpoints: latency-svc-nwldv [2.697280967s]
Feb 12 12:28:23.678: INFO: Created: latency-svc-d2jcs
Feb 12 12:28:25.083: INFO: Got endpoints: latency-svc-d2jcs [4.016805412s]
Feb 12 12:28:25.168: INFO: Created: latency-svc-l4mb7
Feb 12 12:28:25.182: INFO: Got endpoints: latency-svc-l4mb7 [3.930407459s]
Feb 12 12:28:25.326: INFO: Created: latency-svc-fc792
Feb 12 12:28:25.389: INFO: Got endpoints: latency-svc-fc792 [4.085337897s]
Feb 12 12:28:25.621: INFO: Created: latency-svc-6vgf8
Feb 12 12:28:25.649: INFO: Got endpoints: latency-svc-6vgf8 [4.096621073s]
Feb 12 12:28:25.729: INFO: Created: latency-svc-2j7vx
Feb 12 12:28:25.794: INFO: Got endpoints: latency-svc-2j7vx [4.076048308s]
Feb 12 12:28:25.839: INFO: Created: latency-svc-m7kkx
Feb 12 12:28:25.873: INFO: Got endpoints: latency-svc-m7kkx [4.11550123s]
Feb 12 12:28:26.055: INFO: Created: latency-svc-ppbmb
Feb 12 12:28:26.074: INFO: Got endpoints: latency-svc-ppbmb [4.100972295s]
Feb 12 12:28:26.229: INFO: Created: latency-svc-ssdpj
Feb 12 12:28:26.245: INFO: Got endpoints: latency-svc-ssdpj [4.164244134s]
Feb 12 12:28:26.316: INFO: Created: latency-svc-hc7cd
Feb 12 12:28:26.411: INFO: Got endpoints: latency-svc-hc7cd [3.847457279s]
Feb 12 12:28:26.465: INFO: Created: latency-svc-f7hq9
Feb 12 12:28:26.523: INFO: Got endpoints: latency-svc-f7hq9 [3.706070854s]
Feb 12 12:28:26.649: INFO: Created: latency-svc-zc8dt
Feb 12 12:28:26.699: INFO: Got endpoints: latency-svc-zc8dt [3.856071913s]
Feb 12 12:28:26.909: INFO: Created: latency-svc-m4xks
Feb 12 12:28:27.392: INFO: Got endpoints: latency-svc-m4xks [4.196579412s]
Feb 12 12:28:27.415: INFO: Created: latency-svc-zsvs5
Feb 12 12:28:27.899: INFO: Got endpoints: latency-svc-zsvs5 [4.554307312s]
Feb 12 12:28:27.945: INFO: Created: latency-svc-ttcds
Feb 12 12:28:27.972: INFO: Got endpoints: latency-svc-ttcds [4.531658702s]
Feb 12 12:28:28.134: INFO: Created: latency-svc-l7rjf
Feb 12 12:28:28.187: INFO: Got endpoints: latency-svc-l7rjf [4.585211006s]
Feb 12 12:28:28.328: INFO: Created: latency-svc-sldzc
Feb 12 12:28:28.341: INFO: Got endpoints: latency-svc-sldzc [3.257944719s]
Feb 12 12:28:28.657: INFO: Created: latency-svc-9qf7t
Feb 12 12:28:28.657: INFO: Got endpoints: latency-svc-9qf7t [3.474916034s]
Feb 12 12:28:28.848: INFO: Created: latency-svc-rpmdr
Feb 12 12:28:28.866: INFO: Got endpoints: latency-svc-rpmdr [3.476672967s]
Feb 12 12:28:29.025: INFO: Created: latency-svc-5r2fv
Feb 12 12:28:29.035: INFO: Got endpoints: latency-svc-5r2fv [3.385801155s]
Feb 12 12:28:29.083: INFO: Created: latency-svc-8xkr2
Feb 12 12:28:29.101: INFO: Got endpoints: latency-svc-8xkr2 [3.306095472s]
Feb 12 12:28:29.246: INFO: Created: latency-svc-bpssg
Feb 12 12:28:29.246: INFO: Got endpoints: latency-svc-bpssg [3.372004188s]
Feb 12 12:28:29.290: INFO: Created: latency-svc-25pqj
Feb 12 12:28:29.306: INFO: Got endpoints: latency-svc-25pqj [3.231588781s]
Feb 12 12:28:29.439: INFO: Created: latency-svc-9vv74
Feb 12 12:28:29.451: INFO: Got endpoints: latency-svc-9vv74 [3.205314969s]
Feb 12 12:28:29.584: INFO: Created: latency-svc-vrj6t
Feb 12 12:28:29.590: INFO: Got endpoints: latency-svc-vrj6t [3.178157649s]
Feb 12 12:28:29.660: INFO: Created: latency-svc-8tq8z
Feb 12 12:28:29.660: INFO: Got endpoints: latency-svc-8tq8z [3.136581188s]
Feb 12 12:28:29.796: INFO: Created: latency-svc-2kwvl
Feb 12 12:28:29.814: INFO: Got endpoints: latency-svc-2kwvl [3.115291042s]
Feb 12 12:28:29.889: INFO: Created: latency-svc-wwxl2
Feb 12 12:28:29.954: INFO: Got endpoints: latency-svc-wwxl2 [2.561503314s]
Feb 12 12:28:30.035: INFO: Created: latency-svc-46jkf
Feb 12 12:28:30.168: INFO: Got endpoints: latency-svc-46jkf [2.268238571s]
Feb 12 12:28:30.187: INFO: Created: latency-svc-k4zn7
Feb 12 12:28:30.192: INFO: Got endpoints: latency-svc-k4zn7 [2.220567436s]
Feb 12 12:28:30.254: INFO: Created: latency-svc-dp75x
Feb 12 12:28:30.341: INFO: Got endpoints: latency-svc-dp75x [2.154119365s]
Feb 12 12:28:30.368: INFO: Created: latency-svc-2blvw
Feb 12 12:28:30.390: INFO: Got endpoints: latency-svc-2blvw [2.048853473s]
Feb 12 12:28:30.439: INFO: Created: latency-svc-c5dn5
Feb 12 12:28:30.558: INFO: Got endpoints: latency-svc-c5dn5 [1.900126599s]
Feb 12 12:28:30.587: INFO: Created: latency-svc-5gkt7
Feb 12 12:28:30.605: INFO: Got endpoints: latency-svc-5gkt7 [1.738673679s]
Feb 12 12:28:30.738: INFO: Created: latency-svc-rqgmv
Feb 12 12:28:30.746: INFO: Got endpoints: latency-svc-rqgmv [1.710725333s]
Feb 12 12:28:30.809: INFO: Created: latency-svc-dccrh
Feb 12 12:28:30.824: INFO: Got endpoints: latency-svc-dccrh [1.723199756s]
Feb 12 12:28:30.952: INFO: Created: latency-svc-bvh52
Feb 12 12:28:30.959: INFO: Got endpoints: latency-svc-bvh52 [1.712832909s]
Feb 12 12:28:31.014: INFO: Created: latency-svc-pnj45
Feb 12 12:28:31.028: INFO: Got endpoints: latency-svc-pnj45 [1.72168996s]
Feb 12 12:28:31.162: INFO: Created: latency-svc-mdtdr
Feb 12 12:28:31.169: INFO: Got endpoints: latency-svc-mdtdr [1.717592275s]
Feb 12 12:28:31.305: INFO: Created: latency-svc-5wnrk
Feb 12 12:28:31.317: INFO: Got endpoints: latency-svc-5wnrk [1.726077789s]
Feb 12 12:28:31.385: INFO: Created: latency-svc-8kw2l
Feb 12 12:28:31.468: INFO: Got endpoints: latency-svc-8kw2l [1.807807834s]
Feb 12 12:28:31.538: INFO: Created: latency-svc-nlpq6
Feb 12 12:28:31.783: INFO: Created: latency-svc-67wns
Feb 12 12:28:31.850: INFO: Got endpoints: latency-svc-nlpq6 [2.0359815s]
Feb 12 12:28:31.864: INFO: Created: latency-svc-mjvp2
Feb 12 12:28:31.959: INFO: Got endpoints: latency-svc-mjvp2 [1.790549133s]
Feb 12 12:28:31.977: INFO: Got endpoints: latency-svc-67wns [2.022712106s]
Feb 12 12:28:32.025: INFO: Created: latency-svc-sll67
Feb 12 12:28:32.161: INFO: Got endpoints: latency-svc-sll67 [1.968239188s]
Feb 12 12:28:32.187: INFO: Created: latency-svc-zm5dq
Feb 12 12:28:32.222: INFO: Got endpoints: latency-svc-zm5dq [1.880435342s]
Feb 12 12:28:32.334: INFO: Created: latency-svc-ccnms
Feb 12 12:28:32.377: INFO: Got endpoints: latency-svc-ccnms [1.986603108s]
Feb 12 12:28:32.530: INFO: Created: latency-svc-b84ch
Feb 12 12:28:32.576: INFO: Got endpoints: latency-svc-b84ch [2.017891166s]
Feb 12 12:28:32.619: INFO: Created: latency-svc-c97g9
Feb 12 12:28:32.764: INFO: Got endpoints: latency-svc-c97g9 [2.158814782s]
Feb 12 12:28:32.832: INFO: Created: latency-svc-pwgvk
Feb 12 12:28:32.844: INFO: Got endpoints: latency-svc-pwgvk [2.098384448s]
Feb 12 12:28:33.062: INFO: Created: latency-svc-h7r79
Feb 12 12:28:33.076: INFO: Got endpoints: latency-svc-h7r79 [2.251916589s]
Feb 12 12:28:33.322: INFO: Created: latency-svc-ws7bc
Feb 12 12:28:33.332: INFO: Got endpoints: latency-svc-ws7bc [2.3732913s]
Feb 12 12:28:33.403: INFO: Created: latency-svc-2zwtm
Feb 12 12:28:33.572: INFO: Got endpoints: latency-svc-2zwtm [2.544288581s]
Feb 12 12:28:33.635: INFO: Created: latency-svc-wxwzh
Feb 12 12:28:33.639: INFO: Got endpoints: latency-svc-wxwzh [2.47060466s]
Feb 12 12:28:33.809: INFO: Created: latency-svc-pssbg
Feb 12 12:28:33.995: INFO: Got endpoints: latency-svc-pssbg [2.678116204s]
Feb 12 12:28:33.997: INFO: Created: latency-svc-gzrtq
Feb 12 12:28:34.040: INFO: Got endpoints: latency-svc-gzrtq [2.571297981s]
Feb 12 12:28:34.204: INFO: Created: latency-svc-v6rpw
Feb 12 12:28:34.233: INFO: Got endpoints: latency-svc-v6rpw [2.382397831s]
Feb 12 12:28:34.345: INFO: Created: latency-svc-jzsn4
Feb 12 12:28:34.460: INFO: Got endpoints: latency-svc-jzsn4 [2.500837895s]
Feb 12 12:28:34.494: INFO: Created: latency-svc-x5f9j
Feb 12 12:28:34.537: INFO: Got endpoints: latency-svc-x5f9j [2.560321291s]
Feb 12 12:28:34.674: INFO: Created: latency-svc-85w4c
Feb 12 12:28:34.709: INFO: Got endpoints: latency-svc-85w4c [2.547453701s]
Feb 12 12:28:34.830: INFO: Created: latency-svc-54fxn
Feb 12 12:28:34.843: INFO: Got endpoints: latency-svc-54fxn [2.620742381s]
Feb 12 12:28:34.897: INFO: Created: latency-svc-j2glk
Feb 12 12:28:35.046: INFO: Got endpoints: latency-svc-j2glk [2.668686966s]
Feb 12 12:28:35.106: INFO: Created: latency-svc-4wx9l
Feb 12 12:28:35.367: INFO: Got endpoints: latency-svc-4wx9l [2.790932738s]
Feb 12 12:28:35.414: INFO: Created: latency-svc-9gvxl
Feb 12 12:28:35.414: INFO: Got endpoints: latency-svc-9gvxl [2.649860998s]
Feb 12 12:28:35.459: INFO: Created: latency-svc-2xdfx
Feb 12 12:28:35.641: INFO: Got endpoints: latency-svc-2xdfx [2.796225164s]
Feb 12 12:28:35.666: INFO: Created: latency-svc-74lhx
Feb 12 12:28:35.800: INFO: Got endpoints: latency-svc-74lhx [2.723038225s]
Feb 12 12:28:35.829: INFO: Created: latency-svc-nsw9h
Feb 12 12:28:35.854: INFO: Got endpoints: latency-svc-nsw9h [2.521128219s]
Feb 12 12:28:36.045: INFO: Created: latency-svc-7fvn5
Feb 12 12:28:36.071: INFO: Got endpoints: latency-svc-7fvn5 [2.497964275s]
Feb 12 12:28:36.277: INFO: Created: latency-svc-fqbkz
Feb 12 12:28:36.303: INFO: Got endpoints: latency-svc-fqbkz [2.663096435s]
Feb 12 12:28:36.366: INFO: Created: latency-svc-6v2jj
Feb 12 12:28:36.466: INFO: Got endpoints: latency-svc-6v2jj [2.470150497s]
Feb 12 12:28:36.525: INFO: Created: latency-svc-mfdrv
Feb 12 12:28:36.700: INFO: Got endpoints: latency-svc-mfdrv [2.659580807s]
Feb 12 12:28:36.762: INFO: Created: latency-svc-dhgxr
Feb 12 12:28:36.886: INFO: Got endpoints: latency-svc-dhgxr [2.652353137s]
Feb 12 12:28:36.966: INFO: Created: latency-svc-62c2m
Feb 12 12:28:36.969: INFO: Got endpoints: latency-svc-62c2m [2.508489022s]
Feb 12 12:28:37.185: INFO: Created: latency-svc-hcn7c
Feb 12 12:28:37.211: INFO: Got endpoints: latency-svc-hcn7c [2.673360908s]
Feb 12 12:28:37.262: INFO: Created: latency-svc-bztpj
Feb 12 12:28:37.383: INFO: Got endpoints: latency-svc-bztpj [2.674004017s]
Feb 12 12:28:37.400: INFO: Created: latency-svc-9l8mj
Feb 12 12:28:37.413: INFO: Got endpoints: latency-svc-9l8mj [2.570306446s]
Feb 12 12:28:37.470: INFO: Created: latency-svc-jcj8q
Feb 12 12:28:37.571: INFO: Got endpoints: latency-svc-jcj8q [2.524823987s]
Feb 12 12:28:37.588: INFO: Created: latency-svc-56b9h
Feb 12 12:28:37.609: INFO: Got endpoints: latency-svc-56b9h [2.241254804s]
Feb 12 12:28:37.653: INFO: Created: latency-svc-zqs6d
Feb 12 12:28:37.663: INFO: Got endpoints: latency-svc-zqs6d [2.248822131s]
Feb 12 12:28:37.860: INFO: Created: latency-svc-rlfdp
Feb 12 12:28:37.880: INFO: Got endpoints: latency-svc-rlfdp [2.238971181s]
Feb 12 12:28:38.024: INFO: Created: latency-svc-bldqb
Feb 12 12:28:38.042: INFO: Got endpoints: latency-svc-bldqb [2.241551011s]
Feb 12 12:28:38.115: INFO: Created: latency-svc-8qqjn
Feb 12 12:28:38.311: INFO: Got endpoints: latency-svc-8qqjn [2.45704864s]
Feb 12 12:28:38.341: INFO: Created: latency-svc-crdmv
Feb 12 12:28:38.345: INFO: Got endpoints: latency-svc-crdmv [2.274130105s]
Feb 12 12:28:38.421: INFO: Created: latency-svc-nmc9n
Feb 12 12:28:38.515: INFO: Got endpoints: latency-svc-nmc9n [2.211717534s]
Feb 12 12:28:38.537: INFO: Created: latency-svc-lgrst
Feb 12 12:28:38.559: INFO: Got endpoints: latency-svc-lgrst [2.092442348s]
Feb 12 12:28:38.708: INFO: Created: latency-svc-7frwk
Feb 12 12:28:38.720: INFO: Got endpoints: latency-svc-7frwk [2.020550641s]
Feb 12 12:28:38.764: INFO: Created: latency-svc-mdd6p
Feb 12 12:28:38.776: INFO: Got endpoints: latency-svc-mdd6p [1.88969955s]
Feb 12 12:28:38.935: INFO: Created: latency-svc-6tfvw
Feb 12 12:28:39.608: INFO: Got endpoints: latency-svc-6tfvw [2.639483946s]
Feb 12 12:28:39.657: INFO: Created: latency-svc-n8d7f
Feb 12 12:28:39.683: INFO: Got endpoints: latency-svc-n8d7f [2.471424197s]
Feb 12 12:28:39.835: INFO: Created: latency-svc-p9g5d
Feb 12 12:28:39.861: INFO: Got endpoints: latency-svc-p9g5d [2.477739712s]
Feb 12 12:28:39.914: INFO: Created: latency-svc-vzblf
Feb 12 12:28:40.044: INFO: Got endpoints: latency-svc-vzblf [2.63078138s]
Feb 12 12:28:40.072: INFO: Created: latency-svc-977g4
Feb 12 12:28:40.263: INFO: Got endpoints: latency-svc-977g4 [2.690947882s]
Feb 12 12:28:40.275: INFO: Created: latency-svc-zd7p9
Feb 12 12:28:40.311: INFO: Got endpoints: latency-svc-zd7p9 [2.70147892s]
Feb 12 12:28:40.338: INFO: Created: latency-svc-glgzm
Feb 12 12:28:40.464: INFO: Got endpoints: latency-svc-glgzm [2.800701547s]
Feb 12 12:28:40.537: INFO: Created: latency-svc-22c4d
Feb 12 12:28:40.640: INFO: Got endpoints: latency-svc-22c4d [2.760103055s]
Feb 12 12:28:40.650: INFO: Created: latency-svc-k9745
Feb 12 12:28:40.681: INFO: Got endpoints: latency-svc-k9745 [2.638502254s]
Feb 12 12:28:40.721: INFO: Created: latency-svc-4sj8p
Feb 12 12:28:40.818: INFO: Got endpoints: latency-svc-4sj8p [2.506362463s]
Feb 12 12:28:40.841: INFO: Created: latency-svc-m9l9p
Feb 12 12:28:40.866: INFO: Got endpoints: latency-svc-m9l9p [2.520669328s]
Feb 12 12:28:40.914: INFO: Created: latency-svc-nvbsp
Feb 12 12:28:41.000: INFO: Got endpoints: latency-svc-nvbsp [2.485432314s]
Feb 12 12:28:41.016: INFO: Created: latency-svc-sxh6w
Feb 12 12:28:41.036: INFO: Got endpoints: latency-svc-sxh6w [2.47708237s]
Feb 12 12:28:41.082: INFO: Created: latency-svc-zl2fq
Feb 12 12:28:41.182: INFO: Got endpoints: latency-svc-zl2fq [2.461048283s]
Feb 12 12:28:41.212: INFO: Created: latency-svc-nk85q
Feb 12 12:28:41.227: INFO: Got endpoints: latency-svc-nk85q [2.451534976s]
Feb 12 12:28:41.436: INFO: Created: latency-svc-ng29f
Feb 12 12:28:41.470: INFO: Got endpoints: latency-svc-ng29f [1.861592465s]
Feb 12 12:28:41.586: INFO: Created: latency-svc-8mbqd
Feb 12 12:28:41.610: INFO: Got endpoints: latency-svc-8mbqd [1.926830226s]
Feb 12 12:28:41.645: INFO: Created: latency-svc-nv7lc
Feb 12 12:28:41.648: INFO: Got endpoints: latency-svc-nv7lc [1.786766372s]
Feb 12 12:28:41.752: INFO: Created: latency-svc-rnpnc
Feb 12 12:28:41.786: INFO: Got endpoints: latency-svc-rnpnc [1.740719539s]
Feb 12 12:28:41.832: INFO: Created: latency-svc-9k4wh
Feb 12 12:28:42.746: INFO: Got endpoints: latency-svc-9k4wh [2.483181916s]
Feb 12 12:28:43.680: INFO: Created: latency-svc-9sgv4
Feb 12 12:28:43.731: INFO: Got endpoints: latency-svc-9sgv4 [3.420007827s]
Feb 12 12:28:45.216: INFO: Created: latency-svc-5vbfw
Feb 12 12:28:45.330: INFO: Got endpoints: latency-svc-5vbfw [4.865472201s]
Feb 12 12:28:45.375: INFO: Created: latency-svc-hz9g6
Feb 12 12:28:45.404: INFO: Got endpoints: latency-svc-hz9g6 [4.762924193s]
Feb 12 12:28:45.568: INFO: Created: latency-svc-6nq4z
Feb 12 12:28:45.577: INFO: Got endpoints: latency-svc-6nq4z [4.896227064s]
Feb 12 12:28:45.721: INFO: Created: latency-svc-s72qd
Feb 12 12:28:45.773: INFO: Got endpoints: latency-svc-s72qd [4.954301737s]
Feb 12 12:28:45.867: INFO: Created: latency-svc-4ptbv
Feb 12 12:28:46.042: INFO: Got endpoints: latency-svc-4ptbv [5.175454821s]
Feb 12 12:28:46.133: INFO: Created: latency-svc-gf6rz
Feb 12 12:28:46.226: INFO: Got endpoints: latency-svc-gf6rz [5.225776965s]
Feb 12 12:28:46.272: INFO: Created: latency-svc-n6fl7
Feb 12 12:28:46.298: INFO: Got endpoints: latency-svc-n6fl7 [5.261253982s]
Feb 12 12:28:46.527: INFO: Created: latency-svc-s796m
Feb 12 12:28:46.529: INFO: Got endpoints: latency-svc-s796m [5.347175068s]
Feb 12 12:28:46.570: INFO: Created: latency-svc-gncnr
Feb 12 12:28:46.653: INFO: Got endpoints: latency-svc-gncnr [5.424883867s]
Feb 12 12:28:46.702: INFO: Created: latency-svc-gzpdn
Feb 12 12:28:46.739: INFO: Got endpoints: latency-svc-gzpdn [5.268345952s]
Feb 12 12:28:46.852: INFO: Created: latency-svc-hqrc8
Feb 12 12:28:46.901: INFO: Got endpoints: latency-svc-hqrc8 [5.291566657s]
Feb 12 12:28:47.042: INFO: Created: latency-svc-7zqdc
Feb 12 12:28:47.073: INFO: Got endpoints: latency-svc-7zqdc [5.424583722s]
Feb 12 12:28:47.135: INFO: Created: latency-svc-s548v
Feb 12 12:28:47.321: INFO: Got endpoints: latency-svc-s548v [5.534925074s]
Feb 12 12:28:47.351: INFO: Created: latency-svc-l2xbw
Feb 12 12:28:47.427: INFO: Got endpoints: latency-svc-l2xbw [4.680226745s]
Feb 12 12:28:47.894: INFO: Created: latency-svc-gqs88
Feb 12 12:28:48.240: INFO: Got endpoints: latency-svc-gqs88 [4.508670181s]
Feb 12 12:28:49.006: INFO: Created: latency-svc-krxkn
Feb 12 12:28:49.026: INFO: Got endpoints: latency-svc-krxkn [3.696188456s]
Feb 12 12:28:49.093: INFO: Created: latency-svc-j5tlj
Feb 12 12:28:49.202: INFO: Got endpoints: latency-svc-j5tlj [3.798428702s]
Feb 12 12:28:49.240: INFO: Created: latency-svc-l98sz
Feb 12 12:28:49.297: INFO: Got endpoints: latency-svc-l98sz [3.719878759s]
Feb 12 12:28:49.389: INFO: Created: latency-svc-hqhtx
Feb 12 12:28:49.408: INFO: Got endpoints: latency-svc-hqhtx [3.635111997s]
Feb 12 12:28:49.459: INFO: Created: latency-svc-wrtlh
Feb 12 12:28:49.533: INFO: Got endpoints: latency-svc-wrtlh [3.491139929s]
Feb 12 12:28:49.565: INFO: Created: latency-svc-4hpcq
Feb 12 12:28:49.591: INFO: Got endpoints: latency-svc-4hpcq [3.364466115s]
Feb 12 12:28:49.784: INFO: Created: latency-svc-zvscl
Feb 12 12:28:49.815: INFO: Got endpoints: latency-svc-zvscl [3.517099972s]
Feb 12 12:28:49.816: INFO: Latencies: [314.645794ms 375.694118ms 749.452963ms 790.716085ms 967.307362ms 1.296414611s 1.331172118s 1.550293295s 1.646556591s 1.710725333s 1.712832909s 1.717592275s 1.72168996s 1.723199756s 1.726077789s 1.738673679s 1.740719539s 1.786766372s 1.790549133s 1.807807834s 1.861592465s 1.880435342s 1.88969955s 1.89718702s 1.900126599s 1.926830226s 1.968239188s 1.986603108s 2.017891166s 2.020550641s 2.022712106s 2.0359815s 2.048853473s 2.092442348s 2.098384448s 2.123604211s 2.14432219s 2.153083728s 2.154119365s 2.158814782s 2.199999386s 2.211717534s 2.216991596s 2.220567436s 2.238971181s 2.241254804s 2.241551011s 2.244790526s 2.248822131s 2.251916589s 2.252568927s 2.260999026s 2.268238571s 2.274130105s 2.27540335s 2.285790157s 2.305938114s 2.307302121s 2.31055727s 2.312828641s 2.317462124s 2.327347581s 2.373241565s 2.3732913s 2.380566604s 2.382397831s 2.405785299s 2.411767532s 2.415625061s 2.43095893s 2.433277072s 2.435380129s 2.437783704s 2.443355293s 2.445415547s 2.447413763s 2.451534976s 2.452181301s 2.45704864s 2.461048283s 2.462634455s 2.470150497s 2.47060466s 2.471424197s 2.47708237s 2.477739712s 2.483181916s 2.485432314s 2.497964275s 2.500837895s 2.501499371s 2.506362463s 2.508489022s 2.520669328s 2.521128219s 2.524823987s 2.530397514s 2.544288581s 2.547453701s 2.560321291s 2.561503314s 2.563915982s 2.570006994s 2.570306446s 2.571297981s 2.581157358s 2.592098049s 2.592595224s 2.597403144s 2.618276797s 2.620742381s 2.63078138s 2.638502254s 2.639483946s 2.647514709s 2.647997754s 2.648705525s 2.649860998s 2.652353137s 2.657820492s 2.659580807s 2.663096435s 2.668686966s 2.669920512s 2.673360908s 2.674004017s 2.678116204s 2.679554164s 2.682758822s 2.690947882s 2.697280967s 2.70147892s 2.709459435s 2.717459444s 2.717501334s 2.723038225s 2.726958192s 2.760103055s 2.766992878s 2.790280577s 2.790932738s 2.795030632s 2.795385844s 2.796225164s 2.800701547s 2.815253849s 2.822529602s 2.851189615s 2.88110866s 2.946488394s 2.958428323s 3.115291042s 3.136581188s 3.178157649s 3.205314969s 3.231588781s 3.257944719s 3.306095472s 3.364466115s 3.372004188s 3.385801155s 3.420007827s 3.474916034s 3.476672967s 3.491139929s 3.517099972s 3.635111997s 3.696188456s 3.706070854s 3.719878759s 3.798428702s 3.847457279s 3.856071913s 3.930407459s 4.016805412s 4.076048308s 4.085337897s 4.096621073s 4.100972295s 4.11550123s 4.164244134s 4.196579412s 4.508670181s 4.531658702s 4.554307312s 4.585211006s 4.680226745s 4.762924193s 4.865472201s 4.896227064s 4.954301737s 5.175454821s 5.225776965s 5.261253982s 5.268345952s 5.291566657s 5.347175068s 5.424583722s 5.424883867s 5.534925074s]
Feb 12 12:28:49.816: INFO: 50 %ile: 2.561503314s
Feb 12 12:28:49.816: INFO: 90 %ile: 4.164244134s
Feb 12 12:28:49.816: INFO: 99 %ile: 5.424883867s
Feb 12 12:28:49.816: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:28:49.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-w2pbx" for this suite.
Feb 12 12:29:43.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:29:44.007: INFO: namespace: e2e-tests-svc-latency-w2pbx, resource: bindings, ignored listing per whitelist
Feb 12 12:29:44.124: INFO: namespace e2e-tests-svc-latency-w2pbx deletion completed in 54.273774096s

• [SLOW TEST:102.423 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
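The latency run above collects 200 "Got endpoints" samples and reports them as 50/90/99th percentiles. As a minimal, self-contained sketch (not the e2e framework's own code, and not necessarily its exact index formula), the summary step amounts to sorting the observed durations and picking values at percentile positions:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns an approximate p-th percentile (0 < p <= 100) of a
// sorted slice of durations by indexing into the slice, so the result is
// always one of the observed samples rather than an interpolation.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p/100.0) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the samples from the run above; the real test uses all 200.
	samples := []time.Duration{
		314645794 * time.Nanosecond,
		2561503314 * time.Nanosecond,
		4164244134 * time.Nanosecond,
		5424883867 * time.Nanosecond,
		5534925074 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
```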
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:29:44.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-595d76ea-4d93-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-595d76ea-4d93-11ea-b4b9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:30:56.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lwkkr" for this suite.
Feb 12 12:31:20.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:31:21.012: INFO: namespace: e2e-tests-configmap-lwkkr, resource: bindings, ignored listing per whitelist
Feb 12 12:31:21.092: INFO: namespace e2e-tests-configmap-lwkkr deletion completed in 24.378240829s

• [SLOW TEST:96.967 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
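The ConfigMap spec above creates a ConfigMap, mounts it into a pod as a volume, updates the ConfigMap, and then waits for the kubelet to project the new data into the mounted files. A minimal client-go sketch of the update step follows; it assumes a current client-go release (the client contemporary with this log took no context argument), and the namespace, ConfigMap name, and key are placeholders rather than the generated names from this run:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	ns, name := "default", "example-config" // placeholder namespace and name

	// Read-modify-write the ConfigMap; pods mounting it as a volume will
	// eventually see the new value without being restarted, which is what
	// the "waiting to observe update in volume" step checks.
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // placeholder key/value
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap updated; kubelet syncs the volume on its next period")
}
```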
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:31:21.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:31:21.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-fzh2d" to be "success or failure"
Feb 12 12:31:21.405: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.314195ms
Feb 12 12:31:23.461: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087352906s
Feb 12 12:31:25.539: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164696981s
Feb 12 12:31:27.560: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186086494s
Feb 12 12:31:29.575: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200900786s
Feb 12 12:31:31.592: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217583309s
STEP: Saw pod success
Feb 12 12:31:31.592: INFO: Pod "downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:31:31.601: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 12:31:32.620: INFO: Waiting for pod downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:31:32.638: INFO: Pod downwardapi-volume-932f0eeb-4d93-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:31:32.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fzh2d" for this suite.
Feb 12 12:31:38.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:31:38.897: INFO: namespace: e2e-tests-downward-api-fzh2d, resource: bindings, ignored listing per whitelist
Feb 12 12:31:39.176: INFO: namespace e2e-tests-downward-api-fzh2d deletion completed in 6.395927366s

• [SLOW TEST:18.080 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
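The "podname only" case mounts a downward API volume that projects metadata.name into a file, then reads that file from the test container. A sketch of such a pod spec, built with the k8s.io/api types and printed as JSON; the image, names, and paths here are illustrative, not the exact ones the framework generates:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							// Project the pod's own name into a file in the volume.
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```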
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:31:39.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:31:51.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hvm9z" for this suite.
Feb 12 12:31:59.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:31:59.943: INFO: namespace: e2e-tests-kubelet-test-hvm9z, resource: bindings, ignored listing per whitelist
Feb 12 12:32:00.004: INFO: namespace e2e-tests-kubelet-test-hvm9z deletion completed in 8.282135624s

• [SLOW TEST:20.828 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
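This Kubelet spec schedules a busybox container whose command always fails and then asserts that the container status eventually carries a terminated state with a reason recorded by the kubelet. A hedged client-go sketch of that polling check; the namespace and pod name below are placeholders, and a recent client-go is assumed:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	ns, name := "default", "bin-false" // placeholder namespace and pod name

	// Poll the pod until its first container reports a terminated state,
	// then print the reason and exit code the kubelet recorded.
	for i := 0; i < 60; i++ {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, st := range pod.Status.ContainerStatuses {
			if term := st.State.Terminated; term != nil {
				fmt.Printf("container %s terminated: reason=%s exitCode=%d\n",
					st.Name, term.Reason, term.ExitCode)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("container never reported a terminated state")
}
```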
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:32:00.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:32:00.388: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 12 12:32:00.515: INFO: Number of nodes with available pods: 0
Feb 12 12:32:00.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:01.568: INFO: Number of nodes with available pods: 0
Feb 12 12:32:01.569: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:02.571: INFO: Number of nodes with available pods: 0
Feb 12 12:32:02.571: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:03.537: INFO: Number of nodes with available pods: 0
Feb 12 12:32:03.537: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:04.547: INFO: Number of nodes with available pods: 0
Feb 12 12:32:04.547: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:05.529: INFO: Number of nodes with available pods: 0
Feb 12 12:32:05.530: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:06.577: INFO: Number of nodes with available pods: 0
Feb 12 12:32:06.577: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:07.559: INFO: Number of nodes with available pods: 0
Feb 12 12:32:07.559: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:08.568: INFO: Number of nodes with available pods: 0
Feb 12 12:32:08.568: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:09.541: INFO: Number of nodes with available pods: 1
Feb 12 12:32:09.541: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 12 12:32:09.626: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:10.701: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:11.694: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:12.702: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:13.804: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:14.709: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:15.695: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:15.695: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:16.698: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:16.698: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:17.697: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:17.697: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:18.698: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:18.698: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:19.695: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:19.695: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:20.698: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:20.698: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:21.703: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:21.703: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:22.849: INFO: Wrong image for pod: daemon-set-jwf5j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 12 12:32:22.849: INFO: Pod daemon-set-jwf5j is not available
Feb 12 12:32:23.693: INFO: Pod daemon-set-rgr8d is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 12 12:32:23.713: INFO: Number of nodes with available pods: 0
Feb 12 12:32:23.713: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:25.063: INFO: Number of nodes with available pods: 0
Feb 12 12:32:25.063: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:26.124: INFO: Number of nodes with available pods: 0
Feb 12 12:32:26.124: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:26.742: INFO: Number of nodes with available pods: 0
Feb 12 12:32:26.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:27.727: INFO: Number of nodes with available pods: 0
Feb 12 12:32:27.727: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:29.257: INFO: Number of nodes with available pods: 0
Feb 12 12:32:29.257: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:29.742: INFO: Number of nodes with available pods: 0
Feb 12 12:32:29.742: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:30.760: INFO: Number of nodes with available pods: 0
Feb 12 12:32:30.760: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:31.733: INFO: Number of nodes with available pods: 0
Feb 12 12:32:31.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:32:32.736: INFO: Number of nodes with available pods: 1
Feb 12 12:32:32.736: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-qhmf8, will wait for the garbage collector to delete the pods
Feb 12 12:32:32.915: INFO: Deleting DaemonSet.extensions daemon-set took: 100.202573ms
Feb 12 12:32:33.016: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.67271ms
Feb 12 12:32:39.327: INFO: Number of nodes with available pods: 0
Feb 12 12:32:39.327: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 12:32:39.332: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qhmf8/daemonsets","resourceVersion":"21423101"},"items":null}

Feb 12 12:32:39.336: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qhmf8/pods","resourceVersion":"21423101"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:32:39.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-qhmf8" for this suite.
Feb 12 12:32:45.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:32:45.607: INFO: namespace: e2e-tests-daemonsets-qhmf8, resource: bindings, ignored listing per whitelist
Feb 12 12:32:45.619: INFO: namespace e2e-tests-daemonsets-qhmf8 deletion completed in 6.260233081s

• [SLOW TEST:45.615 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
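The rolling-update check creates a single-replica-per-node DaemonSet with updateStrategy RollingUpdate and then swaps the pod image, which is why the log shows the nginx pod being replaced by a redis-image pod. A sketch of the relevant fields using the apps/v1 types; the label key is illustrative, while the two images match the ones logged above:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label

	ds := &appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate tells the controller to replace existing daemon
			// pods when the template changes, as observed in the log above.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))

	// Changing ds.Spec.Template.Spec.Containers[0].Image to
	// gcr.io/kubernetes-e2e-test-images/redis:1.0 and updating the object
	// is what triggers the pod replacement seen between 12:32:09 and 12:32:23.
}
```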
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:32:45.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 12 12:32:45.780: INFO: namespace e2e-tests-kubectl-v9g7l
Feb 12 12:32:45.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v9g7l'
Feb 12 12:32:48.878: INFO: stderr: ""
Feb 12 12:32:48.879: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 12 12:32:50.596: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:50.596: INFO: Found 0 / 1
Feb 12 12:32:50.905: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:50.905: INFO: Found 0 / 1
Feb 12 12:32:51.940: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:51.940: INFO: Found 0 / 1
Feb 12 12:32:52.903: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:52.903: INFO: Found 0 / 1
Feb 12 12:32:53.933: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:53.933: INFO: Found 0 / 1
Feb 12 12:32:54.899: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:54.900: INFO: Found 0 / 1
Feb 12 12:32:56.055: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:56.055: INFO: Found 0 / 1
Feb 12 12:32:56.920: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:56.921: INFO: Found 0 / 1
Feb 12 12:32:58.036: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:58.036: INFO: Found 0 / 1
Feb 12 12:32:58.902: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:58.902: INFO: Found 1 / 1
Feb 12 12:32:58.902: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 12 12:32:58.908: INFO: Selector matched 1 pods for map[app:redis]
Feb 12 12:32:58.909: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 12 12:32:58.909: INFO: wait on redis-master startup in e2e-tests-kubectl-v9g7l 
Feb 12 12:32:58.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-pwzf9 redis-master --namespace=e2e-tests-kubectl-v9g7l'
Feb 12 12:32:59.171: INFO: stderr: ""
Feb 12 12:32:59.171: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 12 Feb 12:32:57.097 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 12:32:57.097 # Server started, Redis version 3.2.12\n1:M 12 Feb 12:32:57.097 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 12:32:57.098 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 12 12:32:59.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-v9g7l'
Feb 12 12:32:59.492: INFO: stderr: ""
Feb 12 12:32:59.492: INFO: stdout: "service/rm2 exposed\n"
Feb 12 12:32:59.722: INFO: Service rm2 in namespace e2e-tests-kubectl-v9g7l found.
STEP: exposing service
Feb 12 12:33:01.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-v9g7l'
Feb 12 12:33:02.150: INFO: stderr: ""
Feb 12 12:33:02.150: INFO: stdout: "service/rm3 exposed\n"
Feb 12 12:33:02.166: INFO: Service rm3 in namespace e2e-tests-kubectl-v9g7l found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:33:04.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v9g7l" for this suite.
Feb 12 12:33:28.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:33:28.451: INFO: namespace: e2e-tests-kubectl-v9g7l, resource: bindings, ignored listing per whitelist
Feb 12 12:33:28.561: INFO: namespace e2e-tests-kubectl-v9g7l deletion completed in 24.250290019s

• [SLOW TEST:42.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
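'kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379' builds a Service whose selector matches the RC's pod labels and whose port 1234 maps to the container's 6379. A rough equivalent of the generated object, written with the core/v1 types; the selector is assumed to be app=redis, as the pod-matching output above suggests:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &v1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"app": "redis"}, // assumed RC pod label
			Ports: []v1.ServicePort{{
				Port:       1234,                 // service port
				TargetPort: intstr.FromInt(6379), // container port on the redis pod
			}},
		},
	}
	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))

	// Exposing the service again as rm3 on port 2345 works the same way:
	// the new Service reuses the selector and points its port at 6379.
}
```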
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:33:28.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-df211d72-4d93-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:33:28.818: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-7rj8q" to be "success or failure"
Feb 12 12:33:28.844: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.758781ms
Feb 12 12:33:30.858: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039869833s
Feb 12 12:33:32.911: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093803981s
Feb 12 12:33:35.708: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.890501921s
Feb 12 12:33:37.724: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.905975383s
Feb 12 12:33:39.736: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.918694105s
STEP: Saw pod success
Feb 12 12:33:39.737: INFO: Pod "pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:33:39.746: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 12:33:39.825: INFO: Waiting for pod pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:33:39.833: INFO: Pod pod-projected-secrets-df2a9303-4d93-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:33:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7rj8q" for this suite.
Feb 12 12:33:45.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:33:46.011: INFO: namespace: e2e-tests-projected-7rj8q, resource: bindings, ignored listing per whitelist
Feb 12 12:33:46.168: INFO: namespace e2e-tests-projected-7rj8q deletion completed in 6.322755071s

• [SLOW TEST:17.606 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
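Here the secret is consumed through a projected volume with a non-default file mode while the pod runs as a non-root user with an fsGroup set. A sketch of that pod-level wiring follows; the UID/GID values, image, and object names are illustrative, not the ones generated by this run:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid := int64(1000), int64(2000) // illustrative non-root identities
	mode := int32(0440)                  // defaultMode applied to the projected files

	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid,
			},
			Containers: []v1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "projected-secret",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []v1.VolumeProjection{{
							Secret: &v1.SecretProjection{
								LocalObjectReference: v1.LocalObjectReference{
									Name: "projected-secret-test-example", // illustrative secret name
								},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```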
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:33:46.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:33:56.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-pv9ft" for this suite.
Feb 12 12:34:02.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:34:02.787: INFO: namespace: e2e-tests-emptydir-wrapper-pv9ft, resource: bindings, ignored listing per whitelist
Feb 12 12:34:02.926: INFO: namespace e2e-tests-emptydir-wrapper-pv9ft deletion completed in 6.311939899s

• [SLOW TEST:16.758 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
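The conflict check above amounts to mounting two differently backed volumes (the suite cleans up a Secret and a ConfigMap) in a single pod and verifying that their emptyDir-backed wrappers do not collide. A rough illustration, with all names assumed:

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volume-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    secret:
      secretName: example-secret
  - name: configmap-vol
    configMap:
      name: example-configmap
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /etc/secret-vol /etc/configmap-vol"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
      readOnly: true
    - name: configmap-vol
      mountPath: /etc/configmap-vol
      readOnly: true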
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:34:02.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pr942
Feb 12 12:34:13.287: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pr942
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 12:34:13.297: INFO: Initial restart count of pod liveness-http is 0
Feb 12 12:34:35.663: INFO: Restart count of pod e2e-tests-container-probe-pr942/liveness-http is now 1 (22.365155818s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:34:35.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pr942" for this suite.
Feb 12 12:34:41.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:34:41.966: INFO: namespace: e2e-tests-container-probe-pr942, resource: bindings, ignored listing per whitelist
Feb 12 12:34:42.093: INFO: namespace e2e-tests-container-probe-pr942 deletion completed in 6.332500586s

• [SLOW TEST:39.167 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
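The restart observed above is the expected outcome of an HTTP liveness probe whose /healthz endpoint starts failing. The pattern follows the standard liveness example; the image, port and timings below are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]               # serves /healthz OK briefly, then returns 500
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1           # restart on the first failed probe

Once the probe fails, the kubelet restarts the container and the pod's restartCount increments, which is exactly what the test waits for.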
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:34:42.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 12 12:34:55.162: INFO: Successfully updated pod "annotationupdate0b0b9e00-4d94-11ea-b4b9-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:34:57.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gwv66" for this suite.
Feb 12 12:35:21.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:35:21.418: INFO: namespace: e2e-tests-projected-gwv66, resource: bindings, ignored listing per whitelist
Feb 12 12:35:21.451: INFO: namespace e2e-tests-projected-gwv66 deletion completed in 24.197331323s

• [SLOW TEST:39.357 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
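What this spec relies on is that a projected downwardAPI volume exposing metadata.annotations is refreshed in place when the pod's annotations change. A sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations

After an update such as 'kubectl annotate pod annotationupdate-example build=two --overwrite', the kubelet rewrites /etc/podinfo/annotations without restarting the container.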
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:35:21.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 12 12:35:21.681: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 12 12:35:26.702: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:35:28.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-trzgl" for this suite.
Feb 12 12:35:36.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:35:36.944: INFO: namespace: e2e-tests-replication-controller-trzgl, resource: bindings, ignored listing per whitelist
Feb 12 12:35:36.988: INFO: namespace e2e-tests-replication-controller-trzgl deletion completed in 8.877874767s

• [SLOW TEST:15.536 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
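"Releasing" a pod here means changing its labels so that the ReplicationController's selector no longer matches it; the controller drops it from its pod list (and creates a replacement), while the original pod keeps running unowned. Assuming the controller selects on a label such as name=pod-release, the step corresponds to something like:

kubectl label pod <pod-name> name=pod-release-released --overwrite

where <pod-name> stands for the pod the controller created.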
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:35:36.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:35:38.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 12 12:35:38.925: INFO: stderr: ""
Feb 12 12:35:38.925: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 12 12:35:38.933: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:35:38.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zsdqf" for this suite.
Feb 12 12:35:45.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:35:45.431: INFO: namespace: e2e-tests-kubectl-zsdqf, resource: bindings, ignored listing per whitelist
Feb 12 12:35:45.829: INFO: namespace e2e-tests-kubectl-zsdqf deletion completed in 6.886705591s

S [SKIPPING] [8.841 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 12 12:35:38.933: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:35:45.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 12 12:35:54.641: INFO: Successfully updated pod "pod-update-activedeadlineseconds-30f2fd54-4d94-11ea-b4b9-0242ac110005"
Feb 12 12:35:54.641: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-30f2fd54-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-pods-rbjrb" to be "terminated due to deadline exceeded"
Feb 12 12:35:54.656: INFO: Pod "pod-update-activedeadlineseconds-30f2fd54-4d94-11ea-b4b9-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 14.263024ms
Feb 12 12:35:56.669: INFO: Pod "pod-update-activedeadlineseconds-30f2fd54-4d94-11ea-b4b9-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.027441236s
Feb 12 12:35:56.669: INFO: Pod "pod-update-activedeadlineseconds-30f2fd54-4d94-11ea-b4b9-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:35:56.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rbjrb" for this suite.
Feb 12 12:36:03.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:36:03.382: INFO: namespace: e2e-tests-pods-rbjrb, resource: bindings, ignored listing per whitelist
Feb 12 12:36:03.435: INFO: namespace e2e-tests-pods-rbjrb deletion completed in 6.760465807s

• [SLOW TEST:17.605 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
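activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a running pod: it can be added or lowered, but not raised or removed. Once the deadline passes, the kubelet terminates the pod and it ends up Failed with reason DeadlineExceeded, as seen above. The update step looks roughly like this (pod name and value assumed):

kubectl patch pod example-pod --type merge -p '{"spec":{"activeDeadlineSeconds":5}}'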
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:36:03.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-3b7d8c83-4d94-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:36:03.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-6ntn8" to be "success or failure"
Feb 12 12:36:03.773: INFO: Pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 56.297398ms
Feb 12 12:36:05.823: INFO: Pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106194387s
Feb 12 12:36:07.836: INFO: Pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118895699s
Feb 12 12:36:10.097: INFO: Pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380335734s
Feb 12 12:36:12.116: INFO: Pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.398945407s
STEP: Saw pod success
Feb 12 12:36:12.116: INFO: Pod "pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:36:12.121: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 12:36:12.373: INFO: Waiting for pod pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:36:12.397: INFO: Pod pod-projected-secrets-3b7fda79-4d94-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:36:12.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6ntn8" for this suite.
Feb 12 12:36:18.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:36:18.617: INFO: namespace: e2e-tests-projected-6ntn8, resource: bindings, ignored listing per whitelist
Feb 12 12:36:18.685: INFO: namespace e2e-tests-projected-6ntn8 deletion completed in 6.2680455s

• [SLOW TEST:15.249 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:36:18.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:36:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gsmz2" for this suite.
Feb 12 12:36:24.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:36:24.970: INFO: namespace: e2e-tests-services-gsmz2, resource: bindings, ignored listing per whitelist
Feb 12 12:36:25.099: INFO: namespace e2e-tests-services-gsmz2 deletion completed in 6.208593907s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.414 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
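This spec essentially asserts that the API server is reachable through the built-in 'kubernetes' Service in the default namespace over HTTPS (port 443). It can be eyeballed with:

kubectl get service kubernetes -n default
# NAME         TYPE        CLUSTER-IP            EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   <cluster-dependent>   <none>        443/TCP   ...

The cluster IP differs per cluster; the relevant part is the 443/TCP port.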
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:36:25.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 12 12:36:25.968: INFO: created pod pod-service-account-defaultsa
Feb 12 12:36:25.968: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 12 12:36:25.991: INFO: created pod pod-service-account-mountsa
Feb 12 12:36:25.992: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 12 12:36:26.113: INFO: created pod pod-service-account-nomountsa
Feb 12 12:36:26.114: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 12 12:36:26.332: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 12 12:36:26.332: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 12 12:36:26.384: INFO: created pod pod-service-account-mountsa-mountspec
Feb 12 12:36:26.385: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 12 12:36:26.538: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 12 12:36:26.538: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 12 12:36:26.608: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 12 12:36:26.608: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 12 12:36:26.742: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 12 12:36:26.743: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 12 12:36:26.781: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 12 12:36:26.781: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:36:26.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-mc6bl" for this suite.
Feb 12 12:36:59.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:36:59.179: INFO: namespace: e2e-tests-svcaccounts-mc6bl, resource: bindings, ignored listing per whitelist
Feb 12 12:36:59.179: INFO: namespace e2e-tests-svcaccounts-mc6bl deletion completed in 32.377365917s

• [SLOW TEST:34.080 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
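The matrix of pods above covers the two places where token automounting can be opted out of: on the ServiceAccount and on the pod spec, with the pod-level field taking precedence. A minimal sketch (names assumed):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level setting wins over the ServiceAccount's
  restartPolicy: Never
  containers:
  - name: token-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount || echo no token mounted"]

With either field set to false (the pod winning on conflict), no token volume is mounted, which matches the 'volume mount: false' lines logged above.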
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:36:59.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-5cb8f3ac-4d94-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:36:59.450: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-f7bj8" to be "success or failure"
Feb 12 12:36:59.593: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 143.07781ms
Feb 12 12:37:01.633: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183009312s
Feb 12 12:37:03.771: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320819206s
Feb 12 12:37:05.793: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343116529s
Feb 12 12:37:07.816: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.36581568s
Feb 12 12:37:09.856: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.406071893s
STEP: Saw pod success
Feb 12 12:37:09.856: INFO: Pod "pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:37:09.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 12:37:10.064: INFO: Waiting for pod pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:37:10.080: INFO: Pod pod-projected-secrets-5cb9d493-4d94-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:37:10.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f7bj8" for this suite.
Feb 12 12:37:16.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:37:16.274: INFO: namespace: e2e-tests-projected-f7bj8, resource: bindings, ignored listing per whitelist
Feb 12 12:37:16.381: INFO: namespace e2e-tests-projected-f7bj8 deletion completed in 6.287857997s

• [SLOW TEST:17.201 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
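Compared with the earlier projected-secret specs, this one maps individual keys to new paths and sets a per-item mode. The volume stanza looks roughly like this (key, path and mode are assumptions):

volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: example-secret
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400

Inside the container the secret then appears only at <mountPath>/new-path-data-1 with 0400 permissions.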
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:37:16.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 12 12:37:16.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-f94mm run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 12 12:37:28.124: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0212 12:37:26.687943    3287 log.go:172] (0xc0008204d0) (0xc0007cabe0) Create stream\nI0212 12:37:26.688441    3287 log.go:172] (0xc0008204d0) (0xc0007cabe0) Stream added, broadcasting: 1\nI0212 12:37:26.711817    3287 log.go:172] (0xc0008204d0) Reply frame received for 1\nI0212 12:37:26.712015    3287 log.go:172] (0xc0008204d0) (0xc0007cac80) Create stream\nI0212 12:37:26.712040    3287 log.go:172] (0xc0008204d0) (0xc0007cac80) Stream added, broadcasting: 3\nI0212 12:37:26.717781    3287 log.go:172] (0xc0008204d0) Reply frame received for 3\nI0212 12:37:26.718022    3287 log.go:172] (0xc0008204d0) (0xc0008ce000) Create stream\nI0212 12:37:26.718072    3287 log.go:172] (0xc0008204d0) (0xc0008ce000) Stream added, broadcasting: 5\nI0212 12:37:26.720568    3287 log.go:172] (0xc0008204d0) Reply frame received for 5\nI0212 12:37:26.720596    3287 log.go:172] (0xc0008204d0) (0xc0007cad20) Create stream\nI0212 12:37:26.720610    3287 log.go:172] (0xc0008204d0) (0xc0007cad20) Stream added, broadcasting: 7\nI0212 12:37:26.724206    3287 log.go:172] (0xc0008204d0) Reply frame received for 7\nI0212 12:37:26.724659    3287 log.go:172] (0xc0007cac80) (3) Writing data frame\nI0212 12:37:26.725070    3287 log.go:172] (0xc0007cac80) (3) Writing data frame\nI0212 12:37:26.744532    3287 log.go:172] (0xc0008204d0) Data frame received for 5\nI0212 12:37:26.744597    3287 log.go:172] (0xc0008ce000) (5) Data frame handling\nI0212 12:37:26.744631    3287 log.go:172] (0xc0008ce000) (5) Data frame sent\nI0212 12:37:26.754164    3287 log.go:172] (0xc0008204d0) Data frame received for 5\nI0212 12:37:26.754194    3287 log.go:172] (0xc0008ce000) (5) Data frame handling\nI0212 12:37:26.754211    3287 log.go:172] (0xc0008ce000) (5) Data frame sent\nI0212 12:37:28.048672    3287 log.go:172] (0xc0008204d0) Data frame received for 1\nI0212 12:37:28.048859    3287 log.go:172] (0xc0008204d0) (0xc0007cac80) Stream removed, broadcasting: 3\nI0212 12:37:28.049068    3287 log.go:172] (0xc0007cabe0) (1) Data frame handling\nI0212 12:37:28.049100    3287 log.go:172] (0xc0007cabe0) (1) Data frame sent\nI0212 12:37:28.049123    3287 log.go:172] (0xc0008204d0) (0xc0008ce000) Stream removed, broadcasting: 5\nI0212 12:37:28.049231    3287 log.go:172] (0xc0008204d0) (0xc0007cad20) Stream removed, broadcasting: 7\nI0212 12:37:28.049285    3287 log.go:172] (0xc0008204d0) (0xc0007cabe0) Stream removed, broadcasting: 1\nI0212 12:37:28.049349    3287 log.go:172] (0xc0008204d0) Go away received\nI0212 12:37:28.049614    3287 log.go:172] (0xc0008204d0) (0xc0007cabe0) Stream removed, broadcasting: 1\nI0212 12:37:28.049640    3287 log.go:172] (0xc0008204d0) (0xc0007cac80) Stream removed, broadcasting: 3\nI0212 12:37:28.049658    3287 log.go:172] (0xc0008204d0) (0xc0008ce000) Stream removed, broadcasting: 5\nI0212 12:37:28.049673    3287 log.go:172] (0xc0008204d0) (0xc0007cad20) Stream removed, broadcasting: 7\n"
Feb 12 12:37:28.125: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:37:30.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f94mm" for this suite.
Feb 12 12:37:36.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:37:36.671: INFO: namespace: e2e-tests-kubectl-f94mm, resource: bindings, ignored listing per whitelist
Feb 12 12:37:36.679: INFO: namespace e2e-tests-kubectl-f94mm deletion completed in 6.519889068s

• [SLOW TEST:20.297 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:37:36.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 12 12:37:36.843: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 12 12:37:36.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:37:37.369: INFO: stderr: ""
Feb 12 12:37:37.369: INFO: stdout: "service/redis-slave created\n"
Feb 12 12:37:37.371: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 12 12:37:37.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:37:38.005: INFO: stderr: ""
Feb 12 12:37:38.005: INFO: stdout: "service/redis-master created\n"
Feb 12 12:37:38.007: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 12 12:37:38.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:37:38.501: INFO: stderr: ""
Feb 12 12:37:38.501: INFO: stdout: "service/frontend created\n"
Feb 12 12:37:38.504: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 12 12:37:38.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:37:38.960: INFO: stderr: ""
Feb 12 12:37:38.960: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 12 12:37:38.962: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 12 12:37:38.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:37:39.436: INFO: stderr: ""
Feb 12 12:37:39.436: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 12 12:37:39.437: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 12 12:37:39.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:37:40.049: INFO: stderr: ""
Feb 12 12:37:40.049: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 12 12:37:40.049: INFO: Waiting for all frontend pods to be Running.
Feb 12 12:38:10.103: INFO: Waiting for frontend to serve content.
Feb 12 12:38:10.458: INFO: Trying to add a new entry to the guestbook.
Feb 12 12:38:10.530: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 12 12:38:10.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:38:10.980: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:38:10.980: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 12:38:10.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:38:11.280: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:38:11.280: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 12:38:11.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:38:11.638: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:38:11.639: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 12:38:11.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:38:11.793: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:38:11.793: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 12:38:11.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:38:12.285: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:38:12.285: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 12:38:12.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-x5clf'
Feb 12 12:38:12.720: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 12:38:12.720: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:38:12.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x5clf" for this suite.
Feb 12 12:38:56.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:38:57.041: INFO: namespace: e2e-tests-kubectl-x5clf, resource: bindings, ignored listing per whitelist
Feb 12 12:38:57.083: INFO: namespace e2e-tests-kubectl-x5clf deletion completed in 44.338973192s

• [SLOW TEST:80.404 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:38:57.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 12 12:38:57.337: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 12:38:57.356: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 12:38:57.361: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 12 12:38:57.382: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:38:57.382: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 12 12:38:57.382: INFO: 	Container coredns ready: true, restart count 0
Feb 12 12:38:57.382: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 12 12:38:57.382: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 12:38:57.382: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:38:57.382: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 12 12:38:57.382: INFO: 	Container weave ready: true, restart count 0
Feb 12 12:38:57.382: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 12:38:57.382: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 12 12:38:57.382: INFO: 	Container coredns ready: true, restart count 0
Feb 12 12:38:57.382: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:38:57.382: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 12 12:38:57.542: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a31ec917-4d94-11ea-b4b9-0242ac110005.15f2a8001d04ec51], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-qq7q7/filler-pod-a31ec917-4d94-11ea-b4b9-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a31ec917-4d94-11ea-b4b9-0242ac110005.15f2a80167e7c5cc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a31ec917-4d94-11ea-b4b9-0242ac110005.15f2a801f108e62a], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a31ec917-4d94-11ea-b4b9-0242ac110005.15f2a80225b8e3ee], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f2a80275d95950], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:39:08.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qq7q7" for this suite.
Feb 12 12:39:15.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:39:15.312: INFO: namespace: e2e-tests-sched-pred-qq7q7, resource: bindings, ignored listing per whitelist
Feb 12 12:39:15.388: INFO: namespace e2e-tests-sched-pred-qq7q7 deletion completed in 6.376803019s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.304 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
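The pattern above is: add up the CPU already requested on the node, fill most of the remaining allocatable CPU with pause pods, then submit one more pod whose request cannot fit and confirm it stays Pending with a FailedScheduling event. The "additional" pod is conceptually just:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"        # assumed to exceed the CPU left unreserved on the node
      limits:
        cpu: "1"

'kubectl describe pod additional-pod' would then show the same '0/1 nodes are available: 1 Insufficient cpu.' message recorded in the events above.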
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:39:15.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 12 12:39:16.868: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:39:17.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jd9v8" for this suite.
Feb 12 12:39:23.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:39:23.275: INFO: namespace: e2e-tests-kubectl-jd9v8, resource: bindings, ignored listing per whitelist
Feb 12 12:39:23.459: INFO: namespace e2e-tests-kubectl-jd9v8 deletion completed in 6.396259101s

• [SLOW TEST:8.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
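Passing --port 0 (-p 0) asks kubectl proxy to bind an ephemeral port and print it, which is what lets the test curl /api/ without a fixed port. Roughly:

kubectl proxy -p 0 --disable-filter &
# Starting to serve on 127.0.0.1:<ephemeral-port>
curl http://127.0.0.1:<ephemeral-port>/api/

The <ephemeral-port> placeholder stands for whatever port the proxy reports; --disable-filter matches the flag used in the run above.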
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:39:23.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 12 12:39:23.807: INFO: Waiting up to 5m0s for pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-var-expansion-4c8m2" to be "success or failure"
Feb 12 12:39:23.821: INFO: Pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.649083ms
Feb 12 12:39:25.834: INFO: Pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026022236s
Feb 12 12:39:27.847: INFO: Pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039020855s
Feb 12 12:39:29.920: INFO: Pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112891028s
Feb 12 12:39:31.942: INFO: Pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134837659s
STEP: Saw pod success
Feb 12 12:39:31.942: INFO: Pod "var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:39:31.957: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb 12 12:39:32.110: INFO: Waiting for pod var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:39:32.136: INFO: Pod var-expansion-b2b7997b-4d94-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:39:32.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-4c8m2" for this suite.
Feb 12 12:39:38.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:39:38.407: INFO: namespace: e2e-tests-var-expansion-4c8m2, resource: bindings, ignored listing per whitelist
Feb 12 12:39:38.481: INFO: namespace e2e-tests-var-expansion-4c8m2 deletion completed in 6.236232455s

• [SLOW TEST:15.022 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
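The substitution being tested is the kubelet's $(VAR) expansion in command/args, drawing values from the container's env. A compact sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c"]
    args: ["echo test-value is $(TEST_VAR)"]   # $(TEST_VAR) is expanded by the kubelet, not the shell
    env:
    - name: TEST_VAR
      value: "test-value"

The container's log would contain the expanded string, which is what the "success or failure" check above reads back.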
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:39:38.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bbb65c8b-4d94-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 12:39:38.826: INFO: Waiting up to 5m0s for pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-nvdgp" to be "success or failure"
Feb 12 12:39:38.838: INFO: Pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.850491ms
Feb 12 12:39:40.855: INFO: Pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029504576s
Feb 12 12:39:42.874: INFO: Pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047921662s
Feb 12 12:39:44.888: INFO: Pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062480684s
Feb 12 12:39:46.902: INFO: Pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075734852s
STEP: Saw pod success
Feb 12 12:39:46.902: INFO: Pod "pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:39:46.911: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005 container secret-env-test: 
STEP: delete the pod
Feb 12 12:39:47.045: INFO: Waiting for pod pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:39:47.053: INFO: Pod pod-secrets-bbb79151-4d94-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:39:47.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nvdgp" for this suite.
Feb 12 12:39:53.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:39:53.498: INFO: namespace: e2e-tests-secrets-nvdgp, resource: bindings, ignored listing per whitelist
Feb 12 12:39:53.543: INFO: namespace e2e-tests-secrets-nvdgp deletion completed in 6.480655264s

• [SLOW TEST:15.061 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
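Consuming a Secret through env vars means wiring a key into the container environment with secretKeyRef; an assumed-name sketch:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: example-secret
          key: data-1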
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:39:53.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 12 12:39:53.780: INFO: Waiting up to 5m0s for pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-jbbmx" to be "success or failure"
Feb 12 12:39:53.881: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 101.088837ms
Feb 12 12:39:56.245: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.465368397s
Feb 12 12:39:58.275: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494743028s
Feb 12 12:40:00.924: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.144026306s
Feb 12 12:40:02.949: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.169414722s
Feb 12 12:40:04.967: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.187134886s
STEP: Saw pod success
Feb 12 12:40:04.967: INFO: Pod "downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:40:04.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb 12 12:40:05.127: INFO: Waiting for pod downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:40:05.134: INFO: Pod downward-api-c49f2cc2-4d94-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:40:05.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jbbmx" for this suite.
Feb 12 12:40:13.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:40:13.564: INFO: namespace: e2e-tests-downward-api-jbbmx, resource: bindings, ignored listing per whitelist
Feb 12 12:40:13.663: INFO: namespace e2e-tests-downward-api-jbbmx deletion completed in 8.52265195s

• [SLOW TEST:20.120 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
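The corresponding pod exposes its own resource limits and requests through resourceFieldRef env vars. A minimal sketch under stated assumptions: the resource values, image and command are not taken from this run; the container name "dapi-container" matches the log above.

```go
// Sketch only: env vars populated from the container's own limits/requests via
// resourceFieldRef. Resource values, image and command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```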
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:40:13.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb 12 12:40:13.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 12 12:40:14.117: INFO: stderr: ""
Feb 12 12:40:14.117: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:40:14.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2pnq9" for this suite.
Feb 12 12:40:20.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:40:20.235: INFO: namespace: e2e-tests-kubectl-2pnq9, resource: bindings, ignored listing per whitelist
Feb 12 12:40:20.636: INFO: namespace e2e-tests-kubectl-2pnq9 deletion completed in 6.501207473s

• [SLOW TEST:6.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
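The cluster-info check above is essentially a substring match on the command's output. A minimal sketch of that idea, shelling out to the same kubectl binary and kubeconfig path shown in the log; note the "Kubernetes master" wording matches this v1.13-era kubectl, while newer releases print "control plane" instead.

```go
// Sketch only: run kubectl cluster-info and look for the master banner, which
// is roughly what the conformance check amounts to.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("/usr/local/bin/kubectl", "--kubeconfig=/root/.kube/config", "cluster-info")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("cluster-info failed:", err)
		return
	}
	// Version-dependent literal: v1.13-era kubectl prints "Kubernetes master is
	// running at ...", newer releases say "Kubernetes control plane".
	if strings.Contains(string(out), "Kubernetes master") {
		fmt.Println("master service advertised in cluster-info")
	} else {
		fmt.Println("master banner not found")
	}
}
```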
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:40:20.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:40:20.905: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 12 12:40:20.976: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 12 12:40:25.993: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 12:40:32.033: INFO: Creating deployment "test-rolling-update-deployment"
Feb 12 12:40:32.205: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 12 12:40:32.237: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 12 12:40:34.292: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 12 12:40:34.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:40:36.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:40:38.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:40:40.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717108032, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 12:40:42.599: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 12 12:40:42.635: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sqr5m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqr5m/deployments/test-rolling-update-deployment,UID:db714076-4d94-11ea-a994-fa163e34d433,ResourceVersion:21424455,Generation:1,CreationTimestamp:2020-02-12 12:40:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-12 12:40:32 +0000 UTC 2020-02-12 12:40:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-12 12:40:41 +0000 UTC 2020-02-12 12:40:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 12 12:40:42.648: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sqr5m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqr5m/replicasets/test-rolling-update-deployment-75db98fb4c,UID:dbb74600-4d94-11ea-a994-fa163e34d433,ResourceVersion:21424446,Generation:1,CreationTimestamp:2020-02-12 12:40:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment db714076-4d94-11ea-a994-fa163e34d433 0xc001275027 0xc001275028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 12 12:40:42.648: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 12 12:40:42.648: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sqr5m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sqr5m/replicasets/test-rolling-update-controller,UID:d4cebb76-4d94-11ea-a994-fa163e34d433,ResourceVersion:21424454,Generation:2,CreationTimestamp:2020-02-12 12:40:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment db714076-4d94-11ea-a994-fa163e34d433 0xc001274de7 0xc001274de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 12:40:42.655: INFO: Pod "test-rolling-update-deployment-75db98fb4c-rd544" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-rd544,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sqr5m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sqr5m/pods/test-rolling-update-deployment-75db98fb4c-rd544,UID:dbbc248f-4d94-11ea-a994-fa163e34d433,ResourceVersion:21424445,Generation:0,CreationTimestamp:2020-02-12 12:40:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c dbb74600-4d94-11ea-a994-fa163e34d433 0xc001574467 0xc001574468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s6gxg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s6gxg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-s6gxg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015744d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015744f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:40:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:40:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:40:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 12:40:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-12 12:40:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-12 12:40:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6da9b6e3a32d6991629cffdfa6d53d863557a63855a4bd315e9a3038dff46a2b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:40:42.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-sqr5m" for this suite.
Feb 12 12:40:50.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:40:51.072: INFO: namespace: e2e-tests-deployment-sqr5m, resource: bindings, ignored listing per whitelist
Feb 12 12:40:51.077: INFO: namespace e2e-tests-deployment-sqr5m deletion completed in 8.368489683s

• [SLOW TEST:30.440 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
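The object dumps above show the adopted ReplicaSet (nginx) scaled to zero and the new redis ReplicaSet owned by the Deployment. A compact sketch of the Deployment that drives this: the selector label "name: sample-pod" and the redis image are taken from the logged objects, everything else is an assumption; maxSurge/maxUnavailable are left at the 25% defaults visible in the dump.

```go
// Sketch only: a Deployment whose selector matches the pre-existing
// "test-rolling-update-controller" pods, so that ReplicaSet is adopted and the
// pods are rolled to the redis template.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod"}
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate strategy with the default 25%/25% surge/unavailable.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
```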
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:40:51.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:41:05.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-98fts" for this suite.
Feb 12 12:41:29.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:41:29.663: INFO: namespace: e2e-tests-replication-controller-98fts, resource: bindings, ignored listing per whitelist
Feb 12 12:41:29.693: INFO: namespace e2e-tests-replication-controller-98fts deletion completed in 24.280377076s

• [SLOW TEST:38.616 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
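The adoption step above hinges on the controller's selector matching the orphan pod's "name: pod-adoption" label. A minimal sketch of such a ReplicationController; the nginx image and object name are assumptions.

```go
// Sketch only: a ReplicationController whose selector matches the orphan pod's
// label, so the existing pod is adopted instead of a replacement being created.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels, // must match the pre-existing pod's labels for adoption
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
```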
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:41:29.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:41:29.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-jr75g" to be "success or failure"
Feb 12 12:41:29.931: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.489307ms
Feb 12 12:41:31.958: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035730593s
Feb 12 12:41:33.983: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060973979s
Feb 12 12:41:36.174: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252345601s
Feb 12 12:41:38.188: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265789605s
Feb 12 12:41:40.206: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.28453829s
STEP: Saw pod success
Feb 12 12:41:40.207: INFO: Pod "downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:41:40.211: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 12:41:40.295: INFO: Waiting for pod downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:41:40.507: INFO: Pod downwardapi-volume-fde4af7c-4d94-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:41:40.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jr75g" for this suite.
Feb 12 12:41:48.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:41:49.458: INFO: namespace: e2e-tests-projected-jr75g, resource: bindings, ignored listing per whitelist
Feb 12 12:41:49.463: INFO: namespace e2e-tests-projected-jr75g deletion completed in 8.939071869s

• [SLOW TEST:19.769 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
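Here the downward API data is delivered as a file through a projected volume rather than env vars. A minimal sketch of that volume shape: the mount path, file name "cpu_request", request value, image and command are assumptions; the container name "client-container" matches the log above.

```go
// Sketch only: a projected volume exposing the container's cpu request as a
// file that the container then reads back.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```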
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:41:49.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-09b7aebf-4d95-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:41:49.722: INFO: Waiting up to 5m0s for pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-7snwg" to be "success or failure"
Feb 12 12:41:49.732: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.167071ms
Feb 12 12:41:51.752: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029407238s
Feb 12 12:41:53.773: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05065834s
Feb 12 12:41:55.800: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077757334s
Feb 12 12:41:57.826: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103515256s
Feb 12 12:42:00.290: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.568133557s
Feb 12 12:42:02.314: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.591288604s
STEP: Saw pod success
Feb 12 12:42:02.314: INFO: Pod "pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:42:02.326: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 12 12:42:02.597: INFO: Waiting for pod pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:42:02.624: INFO: Pod pod-configmaps-09ba0759-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:42:02.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7snwg" for this suite.
Feb 12 12:42:08.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:42:09.158: INFO: namespace: e2e-tests-configmap-7snwg, resource: bindings, ignored listing per whitelist
Feb 12 12:42:09.206: INFO: namespace e2e-tests-configmap-7snwg deletion completed in 6.547190136s

• [SLOW TEST:19.743 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
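The "mappings and Item mode set" case remaps a ConfigMap key to a custom path with an explicit per-item file mode. A minimal sketch of that volume shape: the key, target path, 0400 mode, image and command are assumptions; the container name "configmap-volume-test" matches the log above. The "as non-root" and plain "in volume" ConfigMap specs that follow reuse this shape, either adding a pod-level securityContext.runAsUser or dropping the Items mapping.

```go
// Sketch only: a ConfigMap key projected to a remapped path with a per-item
// file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"ls -l /etc/configmap-volume/path/to && cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: int32Ptr(0400), // per-item mode; DefaultMode would apply otherwise
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```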
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:42:09.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1581ab96-4d95-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:42:09.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-hjq96" to be "success or failure"
Feb 12 12:42:09.488: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.956237ms
Feb 12 12:42:11.775: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302092172s
Feb 12 12:42:13.832: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358538953s
Feb 12 12:42:16.017: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543789768s
Feb 12 12:42:18.031: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557799257s
Feb 12 12:42:20.043: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.570372706s
Feb 12 12:42:22.060: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.587292991s
STEP: Saw pod success
Feb 12 12:42:22.061: INFO: Pod "pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:42:22.134: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 12 12:42:22.221: INFO: Waiting for pod pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:42:22.328: INFO: Pod pod-configmaps-15831d66-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:42:22.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hjq96" for this suite.
Feb 12 12:42:28.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:42:28.532: INFO: namespace: e2e-tests-configmap-hjq96, resource: bindings, ignored listing per whitelist
Feb 12 12:42:28.650: INFO: namespace e2e-tests-configmap-hjq96 deletion completed in 6.304767824s

• [SLOW TEST:19.444 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:42:28.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb 12 12:42:28.854: INFO: Waiting up to 5m0s for pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-containers-xgf6l" to be "success or failure"
Feb 12 12:42:28.867: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.202582ms
Feb 12 12:42:30.890: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035481947s
Feb 12 12:42:32.914: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059625901s
Feb 12 12:42:34.937: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08259928s
Feb 12 12:42:36.951: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096947301s
Feb 12 12:42:38.974: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11956382s
STEP: Saw pod success
Feb 12 12:42:38.974: INFO: Pod "client-containers-21094013-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:42:38.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-21094013-4d95-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:42:39.055: INFO: Waiting for pod client-containers-21094013-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:42:39.067: INFO: Pod client-containers-21094013-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:42:39.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xgf6l" for this suite.
Feb 12 12:42:45.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:42:45.718: INFO: namespace: e2e-tests-containers-xgf6l, resource: bindings, ignored listing per whitelist
Feb 12 12:42:45.823: INFO: namespace e2e-tests-containers-xgf6l deletion completed in 6.737658509s

• [SLOW TEST:17.172 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
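Overriding the image's default command comes down to setting the container's "command" field, which replaces the image ENTRYPOINT (setting "args" would replace CMD instead). A minimal sketch under assumptions: the image and command are illustrative, not the ones this spec actually uses; the container name "test-container" matches the log above.

```go
// Sketch only: a container whose "command" replaces the image's default
// entrypoint.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/echo", "entrypoint overridden"}, // replaces the image ENTRYPOINT
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```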
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:42:45.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 12 12:42:46.021: INFO: Waiting up to 5m0s for pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-zwj7w" to be "success or failure"
Feb 12 12:42:46.037: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.173007ms
Feb 12 12:42:48.048: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026637464s
Feb 12 12:42:50.061: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03969389s
Feb 12 12:42:52.394: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372811995s
Feb 12 12:42:54.409: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387791072s
Feb 12 12:42:56.426: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.404700527s
STEP: Saw pod success
Feb 12 12:42:56.426: INFO: Pod "pod-2b4b111c-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:42:56.431: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2b4b111c-4d95-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:42:56.549: INFO: Waiting for pod pod-2b4b111c-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:42:56.574: INFO: Pod pod-2b4b111c-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:42:56.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zwj7w" for this suite.
Feb 12 12:43:02.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:43:02.878: INFO: namespace: e2e-tests-emptydir-zwj7w, resource: bindings, ignored listing per whitelist
Feb 12 12:43:02.907: INFO: namespace e2e-tests-emptydir-zwj7w deletion completed in 6.318468705s

• [SLOW TEST:17.083 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
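The "(root,0644,tmpfs)" case uses an emptyDir backed by memory (tmpfs) and checks a file created with the requested mode; the "(root,0666,tmpfs)" spec that follows differs only in that mode. A minimal sketch of the volume shape: the mount path, shell command and image are assumptions, while medium "Memory" is what "tmpfs" in the test name refers to.

```go
// Sketch only: an emptyDir volume on the memory medium, with a container that
// writes and inspects a 0644 file in it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```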
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:43:02.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 12 12:43:03.095: INFO: Waiting up to 5m0s for pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-rdp7l" to be "success or failure"
Feb 12 12:43:03.177: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 81.375452ms
Feb 12 12:43:05.200: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104593002s
Feb 12 12:43:07.212: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116829629s
Feb 12 12:43:09.226: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130923901s
Feb 12 12:43:11.241: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145175655s
Feb 12 12:43:13.255: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159883713s
STEP: Saw pod success
Feb 12 12:43:13.255: INFO: Pod "pod-3578f950-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:43:13.262: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3578f950-4d95-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:43:14.028: INFO: Waiting for pod pod-3578f950-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:43:14.622: INFO: Pod pod-3578f950-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:43:14.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rdp7l" for this suite.
Feb 12 12:43:20.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:43:21.003: INFO: namespace: e2e-tests-emptydir-rdp7l, resource: bindings, ignored listing per whitelist
Feb 12 12:43:21.089: INFO: namespace e2e-tests-emptydir-rdp7l deletion completed in 6.433445379s

• [SLOW TEST:18.182 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:43:21.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-dpl9d/configmap-test-40585fdd-4d95-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:43:21.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-dpl9d" to be "success or failure"
Feb 12 12:43:21.346: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.46792ms
Feb 12 12:43:23.454: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117862437s
Feb 12 12:43:25.499: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163175772s
Feb 12 12:43:27.510: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173696648s
Feb 12 12:43:29.521: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185271348s
Feb 12 12:43:31.545: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.208784851s
STEP: Saw pod success
Feb 12 12:43:31.545: INFO: Pod "pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:43:31.560: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005 container env-test: 
STEP: delete the pod
Feb 12 12:43:32.558: INFO: Waiting for pod pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:43:32.672: INFO: Pod pod-configmaps-405902b4-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:43:32.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dpl9d" for this suite.
Feb 12 12:43:38.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:43:38.989: INFO: namespace: e2e-tests-configmap-dpl9d, resource: bindings, ignored listing per whitelist
Feb 12 12:43:39.109: INFO: namespace e2e-tests-configmap-dpl9d deletion completed in 6.414013663s

• [SLOW TEST:18.019 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
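The environment-consumption variant wires a ConfigMap key into an env var with configMapKeyRef. A minimal sketch: the ConfigMap name, key, image and command are assumptions; the container name "env-test" matches the log above.

```go
// Sketch only: an env var resolved from a ConfigMap key via configMapKeyRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```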
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:43:39.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4b189d61-4d95-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 12:43:39.528: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-742h2" to be "success or failure"
Feb 12 12:43:39.541: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.187199ms
Feb 12 12:43:41.561: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032535537s
Feb 12 12:43:43.724: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195728175s
Feb 12 12:43:46.280: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.751832477s
Feb 12 12:43:48.350: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.822138004s
Feb 12 12:43:50.425: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.896828007s
STEP: Saw pod success
Feb 12 12:43:50.425: INFO: Pod "pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:43:50.575: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 12 12:43:50.827: INFO: Waiting for pod pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:43:50.960: INFO: Pod pod-configmaps-4b1a3b92-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:43:50.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-742h2" for this suite.
Feb 12 12:43:57.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:43:57.359: INFO: namespace: e2e-tests-configmap-742h2, resource: bindings, ignored listing per whitelist
Feb 12 12:43:57.432: INFO: namespace e2e-tests-configmap-742h2 deletion completed in 6.40423245s

• [SLOW TEST:18.321 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:43:57.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 12:43:57.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-2hlpz'
Feb 12 12:44:00.815: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 12:44:00.816: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 12 12:44:03.269: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-8nncs]
Feb 12 12:44:03.269: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-8nncs" in namespace "e2e-tests-kubectl-2hlpz" to be "running and ready"
Feb 12 12:44:03.279: INFO: Pod "e2e-test-nginx-rc-8nncs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.452241ms
Feb 12 12:44:05.344: INFO: Pod "e2e-test-nginx-rc-8nncs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075280841s
Feb 12 12:44:07.359: INFO: Pod "e2e-test-nginx-rc-8nncs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090488375s
Feb 12 12:44:09.414: INFO: Pod "e2e-test-nginx-rc-8nncs": Phase="Running", Reason="", readiness=true. Elapsed: 6.145485118s
Feb 12 12:44:09.415: INFO: Pod "e2e-test-nginx-rc-8nncs" satisfied condition "running and ready"
Feb 12 12:44:09.415: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-8nncs]
Feb 12 12:44:09.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2hlpz'
Feb 12 12:44:09.659: INFO: stderr: ""
Feb 12 12:44:09.659: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 12 12:44:09.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2hlpz'
Feb 12 12:44:09.815: INFO: stderr: ""
Feb 12 12:44:09.816: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:44:09.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2hlpz" for this suite.
Feb 12 12:44:35.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:44:35.976: INFO: namespace: e2e-tests-kubectl-2hlpz, resource: bindings, ignored listing per whitelist
Feb 12 12:44:36.103: INFO: namespace e2e-tests-kubectl-2hlpz deletion completed in 26.275560887s

• [SLOW TEST:38.671 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
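The spec above drives kubectl directly, so the same sequence can be reproduced by hand; the namespace below is illustrative, and, as the log itself notes, --generator=run/v1 is deprecated on newer kubectl releases:
# create an RC from an image, read logs through the rc/ prefix, then clean up
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=my-ns
kubectl logs rc/e2e-test-nginx-rc --namespace=my-ns
kubectl delete rc e2e-test-nginx-rc --namespace=my-ns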
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:44:36.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 12:44:36.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-rwltf'
Feb 12 12:44:37.222: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 12:44:37.222: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 12 12:44:37.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-rwltf'
Feb 12 12:44:37.492: INFO: stderr: ""
Feb 12 12:44:37.492: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:44:37.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rwltf" for this suite.
Feb 12 12:44:43.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:44:43.781: INFO: namespace: e2e-tests-kubectl-rwltf, resource: bindings, ignored listing per whitelist
Feb 12 12:44:44.063: INFO: namespace e2e-tests-kubectl-rwltf deletion completed in 6.506904063s

• [SLOW TEST:7.959 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
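The job variant follows the same pattern; again the namespace is illustrative and --generator=job/v1 is likewise deprecated:
# create a Job from an image with restart policy OnFailure, verify it, then clean up
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=my-ns
kubectl get jobs e2e-test-nginx-job --namespace=my-ns
kubectl delete jobs e2e-test-nginx-job --namespace=my-ns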
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:44:44.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 12 12:44:44.344: INFO: Waiting up to 5m0s for pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005" in namespace "e2e-tests-downward-api-6m4dt" to be "success or failure"
Feb 12 12:44:44.358: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.798559ms
Feb 12 12:44:46.379: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034527074s
Feb 12 12:44:48.392: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047193723s
Feb 12 12:44:50.696: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.35127634s
Feb 12 12:44:52.713: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368685966s
Feb 12 12:44:54.728: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.383172932s
Feb 12 12:44:56.743: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.398763554s
STEP: Saw pod success
Feb 12 12:44:56.743: INFO: Pod "downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:44:56.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005 container dapi-container: 
STEP: delete the pod
Feb 12 12:44:56.977: INFO: Waiting for pod downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:44:56.985: INFO: Pod downward-api-71ceec48-4d95-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:44:56.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6m4dt" for this suite.
Feb 12 12:45:03.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:45:03.144: INFO: namespace: e2e-tests-downward-api-6m4dt, resource: bindings, ignored listing per whitelist
Feb 12 12:45:03.191: INFO: namespace e2e-tests-downward-api-6m4dt deletion completed in 6.194782227s

• [SLOW TEST:19.127 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
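A minimal sketch of a Downward API pod that surfaces its own UID as an environment variable, roughly what the spec above checks; the pod name and echo command are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # the shell expands $POD_UID at runtime, after the kubelet injects it
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF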
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:45:03.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-crns
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 12:45:03.749: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-crns" in namespace "e2e-tests-subpath-hk8pr" to be "success or failure"
Feb 12 12:45:03.841: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 91.566309ms
Feb 12 12:45:05.949: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199515596s
Feb 12 12:45:07.980: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23098986s
Feb 12 12:45:09.989: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240428355s
Feb 12 12:45:12.055: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306147773s
Feb 12 12:45:14.079: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329966746s
Feb 12 12:45:16.178: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 12.42893023s
Feb 12 12:45:18.198: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 14.44860498s
Feb 12 12:45:20.217: INFO: Pod "pod-subpath-test-secret-crns": Phase="Pending", Reason="", readiness=false. Elapsed: 16.468178503s
Feb 12 12:45:22.233: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 18.483962366s
Feb 12 12:45:24.252: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 20.50307848s
Feb 12 12:45:26.292: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 22.542659477s
Feb 12 12:45:28.310: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 24.56079573s
Feb 12 12:45:30.336: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 26.58697367s
Feb 12 12:45:32.354: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 28.604600179s
Feb 12 12:45:34.373: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 30.623565002s
Feb 12 12:45:36.422: INFO: Pod "pod-subpath-test-secret-crns": Phase="Running", Reason="", readiness=false. Elapsed: 32.672763124s
Feb 12 12:45:38.437: INFO: Pod "pod-subpath-test-secret-crns": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.687750807s
STEP: Saw pod success
Feb 12 12:45:38.437: INFO: Pod "pod-subpath-test-secret-crns" satisfied condition "success or failure"
Feb 12 12:45:38.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-crns container test-container-subpath-secret-crns: 
STEP: delete the pod
Feb 12 12:45:38.558: INFO: Waiting for pod pod-subpath-test-secret-crns to disappear
Feb 12 12:45:38.592: INFO: Pod pod-subpath-test-secret-crns no longer exists
STEP: Deleting pod pod-subpath-test-secret-crns
Feb 12 12:45:38.592: INFO: Deleting pod "pod-subpath-test-secret-crns" in namespace "e2e-tests-subpath-hk8pr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:45:38.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hk8pr" for this suite.
Feb 12 12:45:44.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:45:44.854: INFO: namespace: e2e-tests-subpath-hk8pr, resource: bindings, ignored listing per whitelist
Feb 12 12:45:45.312: INFO: namespace e2e-tests-subpath-hk8pr deletion completed in 6.621477193s

• [SLOW TEST:42.120 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
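A hedged sketch of a secret volume mounted through subPath, the mechanism this atomic-writer spec covers; the secret name, key, and mount path are illustrative:
kubectl create secret generic subpath-secret --from-literal=secret-key=secret-value
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/mnt/secret-file"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/secret-file
      subPath: secret-key          # mount a single key from the volume as a file
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-secret
EOF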
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:45:45.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-nd2cw
Feb 12 12:45:55.565: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-nd2cw
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 12:45:55.574: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:49:56.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nd2cw" for this suite.
Feb 12 12:50:02.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:50:02.836: INFO: namespace: e2e-tests-container-probe-nd2cw, resource: bindings, ignored listing per whitelist
Feb 12 12:50:02.845: INFO: namespace e2e-tests-container-probe-nd2cw deletion completed in 6.355591373s

• [SLOW TEST:257.532 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
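A sketch of the liveness-probe shape this spec relies on: an HTTP GET against /healthz that keeps succeeding, so restartCount stays at 0. The image, port, and timings below are illustrative; the image is assumed to answer 200 on /healthz:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example
spec:
  containers:
  - name: liveness
    image: example.com/healthz-server:latest   # illustrative: assumed to serve 200 on /healthz
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
EOF
# a healthy probe never increments the restart count
kubectl get pod liveness-http-example -o jsonpath='{.status.containerStatuses[0].restartCount}'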
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:50:02.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 12 12:50:03.055: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9s6bk,SelfLink:/api/v1/namespaces/e2e-tests-watch-9s6bk/configmaps/e2e-watch-test-watch-closed,UID:2fc8b839-4d96-11ea-a994-fa163e34d433,ResourceVersion:21425515,Generation:0,CreationTimestamp:2020-02-12 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 12 12:50:03.055: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9s6bk,SelfLink:/api/v1/namespaces/e2e-tests-watch-9s6bk/configmaps/e2e-watch-test-watch-closed,UID:2fc8b839-4d96-11ea-a994-fa163e34d433,ResourceVersion:21425516,Generation:0,CreationTimestamp:2020-02-12 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 12 12:50:03.071: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9s6bk,SelfLink:/api/v1/namespaces/e2e-tests-watch-9s6bk/configmaps/e2e-watch-test-watch-closed,UID:2fc8b839-4d96-11ea-a994-fa163e34d433,ResourceVersion:21425517,Generation:0,CreationTimestamp:2020-02-12 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 12 12:50:03.071: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9s6bk,SelfLink:/api/v1/namespaces/e2e-tests-watch-9s6bk/configmaps/e2e-watch-test-watch-closed,UID:2fc8b839-4d96-11ea-a994-fa163e34d433,ResourceVersion:21425518,Generation:0,CreationTimestamp:2020-02-12 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:50:03.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9s6bk" for this suite.
Feb 12 12:50:09.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:50:09.207: INFO: namespace: e2e-tests-watch-9s6bk, resource: bindings, ignored listing per whitelist
Feb 12 12:50:09.391: INFO: namespace e2e-tests-watch-9s6bk deletion completed in 6.302375921s

• [SLOW TEST:6.546 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
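The same resume-from-last-resourceVersion behaviour can be observed against the raw API; the namespace and resourceVersion below are placeholders for the values reported by the last event the previous watch delivered:
# proxy the apiserver locally, then open a watch starting at a known resourceVersion
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=21425516"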
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:50:09.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-33afb949-4d96-11ea-b4b9-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-33afba39-4d96-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-33afb949-4d96-11ea-b4b9-0242ac110005
STEP: Updating configmap cm-test-opt-upd-33afba39-4d96-11ea-b4b9-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-33afbb4b-4d96-11ea-b4b9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:51:56.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h78gj" for this suite.
Feb 12 12:52:20.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:52:20.402: INFO: namespace: e2e-tests-configmap-h78gj, resource: bindings, ignored listing per whitelist
Feb 12 12:52:20.540: INFO: namespace e2e-tests-configmap-h78gj deletion completed in 24.211074336s

• [SLOW TEST:131.148 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
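A sketch of the optional ConfigMap volume source involved here; with optional: true the pod starts even if the referenced ConfigMap is absent, and later creates, updates, and deletes show up in the mounted files. Names are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-configmap-example
spec:
  containers:
  - name: volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-cfg
      mountPath: /etc/optional-config
  volumes:
  - name: opt-cfg
    configMap:
      name: cm-test-opt-create     # may not exist yet
      optional: true
EOF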
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:52:20.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 12 12:52:20.989: INFO: Number of nodes with available pods: 0
Feb 12 12:52:20.990: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:22.018: INFO: Number of nodes with available pods: 0
Feb 12 12:52:22.018: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:23.008: INFO: Number of nodes with available pods: 0
Feb 12 12:52:23.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:24.055: INFO: Number of nodes with available pods: 0
Feb 12 12:52:24.056: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:25.032: INFO: Number of nodes with available pods: 0
Feb 12 12:52:25.032: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:27.773: INFO: Number of nodes with available pods: 0
Feb 12 12:52:27.773: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:28.491: INFO: Number of nodes with available pods: 0
Feb 12 12:52:28.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:29.623: INFO: Number of nodes with available pods: 0
Feb 12 12:52:29.623: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:30.004: INFO: Number of nodes with available pods: 0
Feb 12 12:52:30.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:31.015: INFO: Number of nodes with available pods: 0
Feb 12 12:52:31.015: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:32.039: INFO: Number of nodes with available pods: 1
Feb 12 12:52:32.039: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 12 12:52:32.116: INFO: Number of nodes with available pods: 0
Feb 12 12:52:32.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:34.726: INFO: Number of nodes with available pods: 0
Feb 12 12:52:34.726: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:35.797: INFO: Number of nodes with available pods: 0
Feb 12 12:52:35.797: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:36.143: INFO: Number of nodes with available pods: 0
Feb 12 12:52:36.143: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:37.453: INFO: Number of nodes with available pods: 0
Feb 12 12:52:37.453: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:38.135: INFO: Number of nodes with available pods: 0
Feb 12 12:52:38.135: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:39.138: INFO: Number of nodes with available pods: 0
Feb 12 12:52:39.138: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:40.962: INFO: Number of nodes with available pods: 0
Feb 12 12:52:40.962: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:41.164: INFO: Number of nodes with available pods: 0
Feb 12 12:52:41.164: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:42.198: INFO: Number of nodes with available pods: 0
Feb 12 12:52:42.198: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:43.207: INFO: Number of nodes with available pods: 0
Feb 12 12:52:43.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 12 12:52:44.133: INFO: Number of nodes with available pods: 1
Feb 12 12:52:44.133: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-clbtb, will wait for the garbage collector to delete the pods
Feb 12 12:52:44.254: INFO: Deleting DaemonSet.extensions daemon-set took: 60.93551ms
Feb 12 12:52:44.355: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.473676ms
Feb 12 12:52:53.647: INFO: Number of nodes with available pods: 0
Feb 12 12:52:53.648: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 12:52:53.658: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-clbtb/daemonsets","resourceVersion":"21425809"},"items":null}

Feb 12 12:52:53.668: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-clbtb/pods","resourceVersion":"21425809"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:52:53.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-clbtb" for this suite.
Feb 12 12:53:01.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:53:01.862: INFO: namespace: e2e-tests-daemonsets-clbtb, resource: bindings, ignored listing per whitelist
Feb 12 12:53:01.919: INFO: namespace e2e-tests-daemonsets-clbtb deletion completed in 8.230696815s

• [SLOW TEST:41.376 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
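A minimal apps/v1 DaemonSet of the sort created above ("daemon-set"); the controller is expected to recreate any daemon pod whose phase is forced to Failed. The labels and image are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF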
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:53:01.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:53:02.136: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 25.201337ms)
Feb 12 12:53:02.150: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 14.127794ms)
Feb 12 12:53:02.165: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 14.858083ms)
Feb 12 12:53:02.174: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 8.659048ms)
Feb 12 12:53:02.180: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.878388ms)
Feb 12 12:53:02.187: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.152997ms)
Feb 12 12:53:02.193: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.541033ms)
Feb 12 12:53:02.246: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 52.646096ms)
Feb 12 12:53:02.252: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.686943ms)
Feb 12 12:53:02.258: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.078235ms)
Feb 12 12:53:02.265: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.130945ms)
Feb 12 12:53:02.270: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.351061ms)
Feb 12 12:53:02.278: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.951386ms)
Feb 12 12:53:02.289: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 10.243093ms)
Feb 12 12:53:02.295: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.803918ms)
Feb 12 12:53:02.300: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.719582ms)
Feb 12 12:53:02.305: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.612862ms)
Feb 12 12:53:02.309: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.59329ms)
Feb 12 12:53:02.315: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.038259ms)
Feb 12 12:53:02.320: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.073215ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:53:02.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-gnhdw" for this suite.
Feb 12 12:53:08.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:53:08.491: INFO: namespace: e2e-tests-proxy-gnhdw, resource: bindings, ignored listing per whitelist
Feb 12 12:53:08.704: INFO: namespace e2e-tests-proxy-gnhdw deletion completed in 6.379492814s

• [SLOW TEST:6.785 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
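The proxied endpoint exercised above can also be hit directly; kubectl get --raw goes through the apiserver's node proxy subresource, and the node name here is the one from this run:
# list the node's /var/log contents through the apiserver proxy subresource
kubectl get --raw /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/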
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:53:08.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:53:08.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:53:19.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8ftpp" for this suite.
Feb 12 12:54:03.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:54:03.813: INFO: namespace: e2e-tests-pods-8ftpp, resource: bindings, ignored listing per whitelist
Feb 12 12:54:04.043: INFO: namespace e2e-tests-pods-8ftpp deletion completed in 44.603362151s

• [SLOW TEST:55.336 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
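kubectl exec drives the same exec subresource this spec reaches over a websocket; a rough manual equivalent, with an illustrative pod name and command:
kubectl exec pod-exec-websocket-example -- /bin/sh -c 'echo remote execution'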
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:54:04.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:54:14.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-w2dzw" for this suite.
Feb 12 12:55:14.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:55:14.753: INFO: namespace: e2e-tests-kubelet-test-w2dzw, resource: bindings, ignored listing per whitelist
Feb 12 12:55:14.850: INFO: namespace e2e-tests-kubelet-test-w2dzw deletion completed in 1m0.233063339s

• [SLOW TEST:70.805 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
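A sketch of the hostAliases field this spec verifies; the kubelet merges these entries into the container's /etc/hosts. The IP, hostnames, and pod name are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF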
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:55:14.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:55:15.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xw7ck" for this suite.
Feb 12 12:55:39.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:55:39.661: INFO: namespace: e2e-tests-pods-xw7ck, resource: bindings, ignored listing per whitelist
Feb 12 12:55:39.682: INFO: namespace e2e-tests-pods-xw7ck deletion completed in 24.331559585s

• [SLOW TEST:24.831 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
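The QoS class this spec inspects is derived from the pod's resource requests and limits; equal requests and limits yield Guaranteed. The values below are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-class-example
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-class-example -o jsonpath='{.status.qosClass}'   # Guaranteed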
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:55:39.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:55:39.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:55:50.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rmzl9" for this suite.
Feb 12 12:56:32.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:32.418: INFO: namespace: e2e-tests-pods-rmzl9, resource: bindings, ignored listing per whitelist
Feb 12 12:56:32.626: INFO: namespace e2e-tests-pods-rmzl9 deletion completed in 42.38043045s

• [SLOW TEST:52.943 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
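kubectl logs reads the same log subresource this spec streams over a websocket; a rough manual equivalent, with an illustrative pod name and namespace:
kubectl logs pod-logs-websocket-example
kubectl get --raw "/api/v1/namespaces/default/pods/pod-logs-websocket-example/log"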
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:56:32.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 12 12:56:32.838: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 12:56:32.853: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 12:56:32.869: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 12 12:56:32.972: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:56:32.973: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 12 12:56:32.973: INFO: 	Container coredns ready: true, restart count 0
Feb 12 12:56:32.973: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 12 12:56:32.973: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 12:56:32.973: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:56:32.973: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 12 12:56:32.973: INFO: 	Container weave ready: true, restart count 0
Feb 12 12:56:32.973: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 12:56:32.973: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 12 12:56:32.973: INFO: 	Container coredns ready: true, restart count 0
Feb 12 12:56:32.973: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:56:32.973: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-21e07af3-4d97-11ea-b4b9-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-21e07af3-4d97-11ea-b4b9-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-21e07af3-4d97-11ea-b4b9-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:57:03.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-dg86l" for this suite.
Feb 12 12:57:25.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:25.995: INFO: namespace: e2e-tests-sched-pred-dg86l, resource: bindings, ignored listing per whitelist
Feb 12 12:57:26.041: INFO: namespace e2e-tests-sched-pred-dg86l deletion completed in 22.245610659s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:53.415 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
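The label/nodeSelector round-trip in this spec can be reproduced by hand; the label key below mirrors the e2e-style key in the log, but the value and pod name are illustrative:
kubectl label node hunter-server-hu5at5svl7ps kubernetes.io/e2e-example=42
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-example
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF
# remove the label afterwards, as the test does
kubectl label node hunter-server-hu5at5svl7ps kubernetes.io/e2e-example-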
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:57:26.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 12:57:26.543: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 12 12:57:26.559: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pghdw/daemonsets","resourceVersion":"21426288"},"items":null}

Feb 12 12:57:26.563: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pghdw/pods","resourceVersion":"21426288"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:57:26.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-pghdw" for this suite.
Feb 12 12:57:32.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:32.726: INFO: namespace: e2e-tests-daemonsets-pghdw, resource: bindings, ignored listing per whitelist
Feb 12 12:57:32.777: INFO: namespace e2e-tests-daemonsets-pghdw deletion completed in 6.194855974s

S [SKIPPING] [6.736 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 12 12:57:26.543: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
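The skip above is driven purely by node count; a quick way to check whether a cluster meets the two-node requirement:
kubectl get nodes --no-headers | wc -l   # must be >= 2 for this spec to run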
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:57:32.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 12 12:57:33.094: INFO: Waiting up to 5m0s for pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-2fp96" to be "success or failure"
Feb 12 12:57:33.131: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.279858ms
Feb 12 12:57:35.219: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124899714s
Feb 12 12:57:37.244: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149722553s
Feb 12 12:57:39.818: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723444951s
Feb 12 12:57:41.835: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.740934356s
Feb 12 12:57:44.459: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.364792821s
Feb 12 12:57:46.500: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.405330074s
STEP: Saw pod success
Feb 12 12:57:46.500: INFO: Pod "pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 12:57:46.509: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 12:57:47.482: INFO: Waiting for pod pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005 to disappear
Feb 12 12:57:47.494: INFO: Pod pod-3c08d3c5-4d97-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:57:47.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2fp96" for this suite.
Feb 12 12:57:53.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:54.028: INFO: namespace: e2e-tests-emptydir-2fp96, resource: bindings, ignored listing per whitelist
Feb 12 12:57:54.234: INFO: namespace e2e-tests-emptydir-2fp96 deletion completed in 6.731997367s

• [SLOW TEST:21.457 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
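A sketch of an emptyDir pod of the kind these specs create: a file is written into the default-medium volume with the requested mode and then inspected. The paths, mode, and command are illustrative approximations of the test's mounttest container:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/data && chmod 0644 /test-volume/data && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium (node disk)
EOF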
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:57:54.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-xk2s
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 12:57:54.750: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xk2s" in namespace "e2e-tests-subpath-v9pqp" to be "success or failure"
Feb 12 12:57:54.766: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 15.59335ms
Feb 12 12:57:57.236: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485898787s
Feb 12 12:57:59.998: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 5.247579713s
Feb 12 12:58:02.027: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 7.276689771s
Feb 12 12:58:04.937: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187386209s
Feb 12 12:58:06.950: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.199659206s
Feb 12 12:58:09.045: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.294499147s
Feb 12 12:58:11.099: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.348655279s
Feb 12 12:58:13.120: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 18.370282767s
Feb 12 12:58:15.148: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 20.397823453s
Feb 12 12:58:17.334: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 22.584122309s
Feb 12 12:58:19.442: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 24.69226618s
Feb 12 12:58:21.453: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Pending", Reason="", readiness=false. Elapsed: 26.702899647s
Feb 12 12:58:23.692: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 28.941945318s
Feb 12 12:58:25.703: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 30.952893893s
Feb 12 12:58:27.719: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 32.96880663s
Feb 12 12:58:29.748: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 34.997958331s
Feb 12 12:58:31.766: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 37.015823771s
Feb 12 12:58:33.802: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 39.05151685s
Feb 12 12:58:35.833: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 41.082647349s
Feb 12 12:58:37.870: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Running", Reason="", readiness=false. Elapsed: 43.119674808s
Feb 12 12:58:39.888: INFO: Pod "pod-subpath-test-configmap-xk2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 45.138021682s
STEP: Saw pod success
Feb 12 12:58:39.888: INFO: Pod "pod-subpath-test-configmap-xk2s" satisfied condition "success or failure"
Feb 12 12:58:39.896: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-xk2s container test-container-subpath-configmap-xk2s: 
STEP: delete the pod
Feb 12 12:58:41.136: INFO: Waiting for pod pod-subpath-test-configmap-xk2s to disappear
Feb 12 12:58:41.755: INFO: Pod pod-subpath-test-configmap-xk2s no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xk2s
Feb 12 12:58:41.756: INFO: Deleting pod "pod-subpath-test-configmap-xk2s" in namespace "e2e-tests-subpath-v9pqp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:58:41.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-v9pqp" for this suite.
Feb 12 12:58:50.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:58:50.420: INFO: namespace: e2e-tests-subpath-v9pqp, resource: bindings, ignored listing per whitelist
Feb 12 12:58:50.530: INFO: namespace e2e-tests-subpath-v9pqp deletion completed in 8.576581624s

• [SLOW TEST:56.295 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
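Note: the Subpath spec above mounts a single ConfigMap key into the container through volumeMounts.subPath and reads it back until the pod succeeds. A minimal hand-written sketch of the same idea (resource names, image, and key name below are assumptions, not the objects the test created):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-demo-config         # hypothetical
  data:
    index.html: "hello"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo                # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["cat", "/usr/share/test/index.html"]
      volumeMounts:
      - name: config
        mountPath: /usr/share/test/index.html
        subPath: index.html           # mount exactly one key as a file
    volumes:
    - name: config
      configMap:
        name: subpath-demo-config

One caveat worth remembering: configMap, secret, downwardAPI, and projected volumes are the "atomic writer" volumes this suite targets, and their contents are refreshed via a directory symlink swap; a subPath mount of such a volume does not receive those updates after the pod starts.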
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:58:50.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 12 12:58:51.283: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 12:58:51.292: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 12:58:51.295: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test

Feb 12 12:58:51.309: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:58:51.309: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 12 12:58:51.309: INFO: 	Container weave ready: true, restart count 0
Feb 12 12:58:51.309: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 12:58:51.309: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Feb 12 12:58:51.309: INFO: 	Container coredns ready: true, restart count 0
Feb 12 12:58:51.309: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:58:51.309: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:58:51.309: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 12 12:58:51.309: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Feb 12 12:58:51.309: INFO: 	Container coredns ready: true, restart count 0
Feb 12 12:58:51.309: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Feb 12 12:58:51.309: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f2a9160d4d5f12], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:58:52.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-kjnnk" for this suite.
Feb 12 12:59:00.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:01.046: INFO: namespace: e2e-tests-sched-pred-kjnnk, resource: bindings, ignored listing per whitelist
Feb 12 12:59:01.090: INFO: namespace e2e-tests-sched-pred-kjnnk deletion completed in 8.611101618s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:10.559 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
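Note: the SchedulerPredicates spec above asks for a node label that no node carries and only checks that the scheduler emits the expected FailedScheduling event. A hedged way to reproduce the same situation by hand (the pod name and label key below are made up):

  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-demo             # hypothetical
  spec:
    nodeSelector:
      example.com/nonexistent: "42"   # no node has this label
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1

  kubectl describe pod restricted-demo
  # Events should include: FailedScheduling ... 0/N nodes are available: N node(s) didn't match node selector.

The pod simply stays Pending; here the test cleans it up along with the namespace.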
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:59:01.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 12:59:01.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-4cb5n'
Feb 12 12:59:04.425: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 12:59:04.425: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 12 12:59:09.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4cb5n'
Feb 12 12:59:09.898: INFO: stderr: ""
Feb 12 12:59:09.898: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 12:59:09.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4cb5n" for this suite.
Feb 12 12:59:16.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:16.348: INFO: namespace: e2e-tests-kubectl-4cb5n, resource: bindings, ignored listing per whitelist
Feb 12 12:59:16.385: INFO: namespace e2e-tests-kubectl-4cb5n deletion completed in 6.245554177s

• [SLOW TEST:15.295 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
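Note: the deprecation warning captured above is the stock kubectl message for generator-based `kubectl run`. The non-deprecated equivalents it points at would be, roughly (names kept from the log for illustration only):

  # create a Deployment directly
  kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

  # or create a bare Pod instead of a Deployment
  kubectl run e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1

On a cluster of this vintage, `kubectl create deployment` should produce an apps/v1 Deployment rather than the deployment.extensions object shown in the stdout above, which is exactly the migration the warning is nudging toward.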
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 12:59:16.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:00:36.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-pdsh2" for this suite.
Feb 12 13:00:44.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:00:44.604: INFO: namespace: e2e-tests-container-runtime-pdsh2, resource: bindings, ignored listing per whitelist
Feb 12 13:00:44.908: INFO: namespace e2e-tests-container-runtime-pdsh2 deletion completed in 8.546356513s

• [SLOW TEST:88.522 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
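Note: the three container names above (terminate-cmd-rpa, -rpof, -rpn) appear to encode the restart policies the blackbox test cycles through (Always, OnFailure, Never); that mapping is inferred from the names, not stated in this log. A minimal hand-rolled check of the same status fields might look like this (pod name, image, and command are assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: terminate-demo              # hypothetical
  spec:
    restartPolicy: Never              # compare behaviour with Always and OnFailure
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "exit 0"]

  kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
  # expected with restartPolicy Never and exit 0: Succeeded 0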
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:00:44.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-ae8f6591-4d97-11ea-b4b9-0242ac110005
STEP: Creating secret with name s-test-opt-upd-ae8f6611-4d97-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-ae8f6591-4d97-11ea-b4b9-0242ac110005
STEP: Updating secret s-test-opt-upd-ae8f6611-4d97-11ea-b4b9-0242ac110005
STEP: Creating secret with name s-test-opt-create-ae8f6645-4d97-11ea-b4b9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:01:05.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rbs9s" for this suite.
Feb 12 13:01:31.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:01:31.892: INFO: namespace: e2e-tests-projected-rbs9s, resource: bindings, ignored listing per whitelist
Feb 12 13:01:32.029: INFO: namespace e2e-tests-projected-rbs9s deletion completed in 26.222457088s

• [SLOW TEST:47.120 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
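Note: the Projected secret spec above checks that deleting, updating, and creating optional secrets is eventually reflected inside a projected volume without restarting the pod. A hedged sketch of such a volume (all names are made up; how quickly updates land depends on the kubelet sync period):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo              # hypothetical
  spec:
    containers:
    - name: c
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: creds
        mountPath: /etc/creds
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: may-not-exist-yet   # hypothetical secret name
            optional: true            # pod starts even if the secret is absent; files appear once it exists

Because the whole directory is an atomic-writer volume, changes arrive as a directory swap; they would not propagate into a subPath mount of the same volume.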
SSSSSSSSSSSSSS
------------------------------
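Note: the next spec exercises StatefulSet burst scaling, which in the StatefulSet API maps onto podManagementPolicy: Parallel (the default, OrderedReady, scales one ordinal at a time and waits for readiness). The Ready=false flapping in the log below comes from the test deliberately moving the served index.html out of place so the pods' readiness checks fail; the exact probe is not shown in this log. A hedged sketch of such a StatefulSet (all names and the probe are illustrative assumptions):

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss-demo                     # hypothetical
  spec:
    serviceName: test
    podManagementPolicy: Parallel     # "burst": create and delete pods without waiting on ordinal order
    replicas: 3
    selector:
      matchLabels:
        app: ss-demo
    template:
      metadata:
        labels:
          app: ss-demo
      spec:
        containers:
        - name: nginx
          image: nginx:1.14-alpine
          readinessProbe:
            httpGet:
              path: /index.html
              port: 80

Scaling it in a burst is then just `kubectl scale statefulset ss-demo --replicas=N`, and the point of the spec is that neither scale direction blocks on pods that are Running but not Ready.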
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:01:32.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-5krtj
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-5krtj
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-5krtj
Feb 12 13:01:32.623: INFO: Found 0 stateful pods, waiting for 1
Feb 12 13:01:43.151: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 13:01:52.643: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 12 13:01:52.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:01:53.594: INFO: stderr: "I0212 13:01:52.891606    3791 log.go:172] (0xc000138840) (0xc0006512c0) Create stream\nI0212 13:01:52.892030    3791 log.go:172] (0xc000138840) (0xc0006512c0) Stream added, broadcasting: 1\nI0212 13:01:52.897879    3791 log.go:172] (0xc000138840) Reply frame received for 1\nI0212 13:01:52.897964    3791 log.go:172] (0xc000138840) (0xc000672000) Create stream\nI0212 13:01:52.897972    3791 log.go:172] (0xc000138840) (0xc000672000) Stream added, broadcasting: 3\nI0212 13:01:52.898945    3791 log.go:172] (0xc000138840) Reply frame received for 3\nI0212 13:01:52.898993    3791 log.go:172] (0xc000138840) (0xc0007c4000) Create stream\nI0212 13:01:52.899021    3791 log.go:172] (0xc000138840) (0xc0007c4000) Stream added, broadcasting: 5\nI0212 13:01:52.899826    3791 log.go:172] (0xc000138840) Reply frame received for 5\nI0212 13:01:53.297574    3791 log.go:172] (0xc000138840) Data frame received for 3\nI0212 13:01:53.297773    3791 log.go:172] (0xc000672000) (3) Data frame handling\nI0212 13:01:53.297810    3791 log.go:172] (0xc000672000) (3) Data frame sent\nI0212 13:01:53.573339    3791 log.go:172] (0xc000138840) (0xc000672000) Stream removed, broadcasting: 3\nI0212 13:01:53.573826    3791 log.go:172] (0xc000138840) Data frame received for 1\nI0212 13:01:53.573895    3791 log.go:172] (0xc0006512c0) (1) Data frame handling\nI0212 13:01:53.573990    3791 log.go:172] (0xc0006512c0) (1) Data frame sent\nI0212 13:01:53.574023    3791 log.go:172] (0xc000138840) (0xc0006512c0) Stream removed, broadcasting: 1\nI0212 13:01:53.574204    3791 log.go:172] (0xc000138840) (0xc0007c4000) Stream removed, broadcasting: 5\nI0212 13:01:53.574286    3791 log.go:172] (0xc000138840) Go away received\nI0212 13:01:53.575248    3791 log.go:172] (0xc000138840) (0xc0006512c0) Stream removed, broadcasting: 1\nI0212 13:01:53.575418    3791 log.go:172] (0xc000138840) (0xc000672000) Stream removed, broadcasting: 3\nI0212 13:01:53.575437    3791 log.go:172] (0xc000138840) (0xc0007c4000) Stream removed, broadcasting: 5\n"
Feb 12 13:01:53.594: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:01:53.594: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:01:53.710: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:01:53.710: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 13:01:53.875: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:01:53.875: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:01:53.875: INFO: 
Feb 12 13:01:53.875: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 12 13:01:55.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.895075645s
Feb 12 13:01:57.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.007025178s
Feb 12 13:01:58.617: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.467196552s
Feb 12 13:01:59.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.153138336s
Feb 12 13:02:01.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.896281335s
Feb 12 13:02:02.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.710865534s
Feb 12 13:02:03.103: INFO: Verifying statefulset ss doesn't scale past 3 for another 682.541628ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-5krtj
Feb 12 13:02:05.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:02:07.489: INFO: stderr: "I0212 13:02:06.891557    3814 log.go:172] (0xc0001506e0) (0xc000619220) Create stream\nI0212 13:02:06.892597    3814 log.go:172] (0xc0001506e0) (0xc000619220) Stream added, broadcasting: 1\nI0212 13:02:06.907086    3814 log.go:172] (0xc0001506e0) Reply frame received for 1\nI0212 13:02:06.907221    3814 log.go:172] (0xc0001506e0) (0xc00074a000) Create stream\nI0212 13:02:06.907324    3814 log.go:172] (0xc0001506e0) (0xc00074a000) Stream added, broadcasting: 3\nI0212 13:02:06.909395    3814 log.go:172] (0xc0001506e0) Reply frame received for 3\nI0212 13:02:06.909564    3814 log.go:172] (0xc0001506e0) (0xc000288000) Create stream\nI0212 13:02:06.909596    3814 log.go:172] (0xc0001506e0) (0xc000288000) Stream added, broadcasting: 5\nI0212 13:02:06.911653    3814 log.go:172] (0xc0001506e0) Reply frame received for 5\nI0212 13:02:07.286824    3814 log.go:172] (0xc0001506e0) Data frame received for 3\nI0212 13:02:07.286896    3814 log.go:172] (0xc00074a000) (3) Data frame handling\nI0212 13:02:07.286929    3814 log.go:172] (0xc00074a000) (3) Data frame sent\nI0212 13:02:07.467281    3814 log.go:172] (0xc0001506e0) (0xc000288000) Stream removed, broadcasting: 5\nI0212 13:02:07.467590    3814 log.go:172] (0xc0001506e0) (0xc00074a000) Stream removed, broadcasting: 3\nI0212 13:02:07.467669    3814 log.go:172] (0xc0001506e0) Data frame received for 1\nI0212 13:02:07.467687    3814 log.go:172] (0xc000619220) (1) Data frame handling\nI0212 13:02:07.467716    3814 log.go:172] (0xc000619220) (1) Data frame sent\nI0212 13:02:07.467737    3814 log.go:172] (0xc0001506e0) (0xc000619220) Stream removed, broadcasting: 1\nI0212 13:02:07.467759    3814 log.go:172] (0xc0001506e0) Go away received\nI0212 13:02:07.469491    3814 log.go:172] (0xc0001506e0) (0xc000619220) Stream removed, broadcasting: 1\nI0212 13:02:07.469656    3814 log.go:172] (0xc0001506e0) (0xc00074a000) Stream removed, broadcasting: 3\nI0212 13:02:07.469676    3814 log.go:172] (0xc0001506e0) (0xc000288000) Stream removed, broadcasting: 5\n"
Feb 12 13:02:07.489: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:02:07.489: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:02:07.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:02:09.091: INFO: rc: 1
Feb 12 13:02:09.091: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00231a180 exit status 1   true [0xc0000e82f8 0xc000c50010 0xc000c50028] [0xc0000e82f8 0xc000c50010 0xc000c50028] [0xc000c50008 0xc000c50020] [0x935700 0x935700] 0xc0020a0660 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 12 13:02:19.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:02:19.754: INFO: stderr: "I0212 13:02:19.366193    3857 log.go:172] (0xc0006d42c0) (0xc0006f4640) Create stream\nI0212 13:02:19.366841    3857 log.go:172] (0xc0006d42c0) (0xc0006f4640) Stream added, broadcasting: 1\nI0212 13:02:19.376889    3857 log.go:172] (0xc0006d42c0) Reply frame received for 1\nI0212 13:02:19.377060    3857 log.go:172] (0xc0006d42c0) (0xc0006f46e0) Create stream\nI0212 13:02:19.377090    3857 log.go:172] (0xc0006d42c0) (0xc0006f46e0) Stream added, broadcasting: 3\nI0212 13:02:19.379273    3857 log.go:172] (0xc0006d42c0) Reply frame received for 3\nI0212 13:02:19.379440    3857 log.go:172] (0xc0006d42c0) (0xc0005eaf00) Create stream\nI0212 13:02:19.379463    3857 log.go:172] (0xc0006d42c0) (0xc0005eaf00) Stream added, broadcasting: 5\nI0212 13:02:19.380837    3857 log.go:172] (0xc0006d42c0) Reply frame received for 5\nI0212 13:02:19.601073    3857 log.go:172] (0xc0006d42c0) Data frame received for 3\nI0212 13:02:19.601238    3857 log.go:172] (0xc0006f46e0) (3) Data frame handling\nI0212 13:02:19.601274    3857 log.go:172] (0xc0006f46e0) (3) Data frame sent\nI0212 13:02:19.601335    3857 log.go:172] (0xc0006d42c0) Data frame received for 5\nI0212 13:02:19.601391    3857 log.go:172] (0xc0005eaf00) (5) Data frame handling\nI0212 13:02:19.601410    3857 log.go:172] (0xc0005eaf00) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0212 13:02:19.736918    3857 log.go:172] (0xc0006d42c0) Data frame received for 1\nI0212 13:02:19.737121    3857 log.go:172] (0xc0006d42c0) (0xc0006f46e0) Stream removed, broadcasting: 3\nI0212 13:02:19.737294    3857 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0212 13:02:19.737338    3857 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0212 13:02:19.737440    3857 log.go:172] (0xc0006d42c0) (0xc0005eaf00) Stream removed, broadcasting: 5\nI0212 13:02:19.737499    3857 log.go:172] (0xc0006d42c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0212 13:02:19.738309    3857 log.go:172] (0xc0006d42c0) (0xc0006f4640) Stream removed, broadcasting: 1\nI0212 13:02:19.738333    3857 log.go:172] (0xc0006d42c0) (0xc0006f46e0) Stream removed, broadcasting: 3\nI0212 13:02:19.738343    3857 log.go:172] (0xc0006d42c0) (0xc0005eaf00) Stream removed, broadcasting: 5\n"
Feb 12 13:02:19.754: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:02:19.754: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:02:19.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:02:20.208: INFO: stderr: "I0212 13:02:19.948734    3878 log.go:172] (0xc0006ee370) (0xc000712640) Create stream\nI0212 13:02:19.949011    3878 log.go:172] (0xc0006ee370) (0xc000712640) Stream added, broadcasting: 1\nI0212 13:02:19.954282    3878 log.go:172] (0xc0006ee370) Reply frame received for 1\nI0212 13:02:19.954339    3878 log.go:172] (0xc0006ee370) (0xc000650d20) Create stream\nI0212 13:02:19.954350    3878 log.go:172] (0xc0006ee370) (0xc000650d20) Stream added, broadcasting: 3\nI0212 13:02:19.955811    3878 log.go:172] (0xc0006ee370) Reply frame received for 3\nI0212 13:02:19.955876    3878 log.go:172] (0xc0006ee370) (0xc00074e000) Create stream\nI0212 13:02:19.955913    3878 log.go:172] (0xc0006ee370) (0xc00074e000) Stream added, broadcasting: 5\nI0212 13:02:19.957025    3878 log.go:172] (0xc0006ee370) Reply frame received for 5\nI0212 13:02:20.063855    3878 log.go:172] (0xc0006ee370) Data frame received for 5\nI0212 13:02:20.064010    3878 log.go:172] (0xc00074e000) (5) Data frame handling\nI0212 13:02:20.064027    3878 log.go:172] (0xc00074e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0212 13:02:20.064055    3878 log.go:172] (0xc0006ee370) Data frame received for 3\nI0212 13:02:20.064059    3878 log.go:172] (0xc000650d20) (3) Data frame handling\nI0212 13:02:20.064067    3878 log.go:172] (0xc000650d20) (3) Data frame sent\nI0212 13:02:20.194516    3878 log.go:172] (0xc0006ee370) Data frame received for 1\nI0212 13:02:20.194882    3878 log.go:172] (0xc0006ee370) (0xc000650d20) Stream removed, broadcasting: 3\nI0212 13:02:20.194947    3878 log.go:172] (0xc000712640) (1) Data frame handling\nI0212 13:02:20.194987    3878 log.go:172] (0xc000712640) (1) Data frame sent\nI0212 13:02:20.195005    3878 log.go:172] (0xc0006ee370) (0xc00074e000) Stream removed, broadcasting: 5\nI0212 13:02:20.195044    3878 log.go:172] (0xc0006ee370) (0xc000712640) Stream removed, broadcasting: 1\nI0212 13:02:20.195075    3878 log.go:172] (0xc0006ee370) Go away received\nI0212 13:02:20.195659    3878 log.go:172] (0xc0006ee370) (0xc000712640) Stream removed, broadcasting: 1\nI0212 13:02:20.195681    3878 log.go:172] (0xc0006ee370) (0xc000650d20) Stream removed, broadcasting: 3\nI0212 13:02:20.195711    3878 log.go:172] (0xc0006ee370) (0xc00074e000) Stream removed, broadcasting: 5\n"
Feb 12 13:02:20.209: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 12 13:02:20.209: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 12 13:02:20.221: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 13:02:20.221: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 13:02:20.221: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 12 13:02:20.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:02:20.905: INFO: stderr: "I0212 13:02:20.416168    3898 log.go:172] (0xc00014c790) (0xc000779360) Create stream\nI0212 13:02:20.416514    3898 log.go:172] (0xc00014c790) (0xc000779360) Stream added, broadcasting: 1\nI0212 13:02:20.420088    3898 log.go:172] (0xc00014c790) Reply frame received for 1\nI0212 13:02:20.420130    3898 log.go:172] (0xc00014c790) (0xc0006d0000) Create stream\nI0212 13:02:20.420138    3898 log.go:172] (0xc00014c790) (0xc0006d0000) Stream added, broadcasting: 3\nI0212 13:02:20.421049    3898 log.go:172] (0xc00014c790) Reply frame received for 3\nI0212 13:02:20.421071    3898 log.go:172] (0xc00014c790) (0xc0006de000) Create stream\nI0212 13:02:20.421079    3898 log.go:172] (0xc00014c790) (0xc0006de000) Stream added, broadcasting: 5\nI0212 13:02:20.421800    3898 log.go:172] (0xc00014c790) Reply frame received for 5\nI0212 13:02:20.731101    3898 log.go:172] (0xc00014c790) Data frame received for 3\nI0212 13:02:20.731283    3898 log.go:172] (0xc0006d0000) (3) Data frame handling\nI0212 13:02:20.731306    3898 log.go:172] (0xc0006d0000) (3) Data frame sent\nI0212 13:02:20.887794    3898 log.go:172] (0xc00014c790) Data frame received for 1\nI0212 13:02:20.888756    3898 log.go:172] (0xc00014c790) (0xc0006d0000) Stream removed, broadcasting: 3\nI0212 13:02:20.889005    3898 log.go:172] (0xc000779360) (1) Data frame handling\nI0212 13:02:20.889034    3898 log.go:172] (0xc000779360) (1) Data frame sent\nI0212 13:02:20.889054    3898 log.go:172] (0xc00014c790) (0xc000779360) Stream removed, broadcasting: 1\nI0212 13:02:20.890083    3898 log.go:172] (0xc00014c790) (0xc0006de000) Stream removed, broadcasting: 5\nI0212 13:02:20.890172    3898 log.go:172] (0xc00014c790) (0xc000779360) Stream removed, broadcasting: 1\nI0212 13:02:20.890185    3898 log.go:172] (0xc00014c790) (0xc0006d0000) Stream removed, broadcasting: 3\nI0212 13:02:20.890200    3898 log.go:172] (0xc00014c790) (0xc0006de000) Stream removed, broadcasting: 5\nI0212 13:02:20.890931    3898 log.go:172] (0xc00014c790) Go away received\n"
Feb 12 13:02:20.906: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:02:20.906: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:02:20.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:02:21.480: INFO: stderr: "I0212 13:02:21.194740    3919 log.go:172] (0xc000782160) (0xc0005b6000) Create stream\nI0212 13:02:21.195097    3919 log.go:172] (0xc000782160) (0xc0005b6000) Stream added, broadcasting: 1\nI0212 13:02:21.201470    3919 log.go:172] (0xc000782160) Reply frame received for 1\nI0212 13:02:21.201577    3919 log.go:172] (0xc000782160) (0xc000368be0) Create stream\nI0212 13:02:21.201587    3919 log.go:172] (0xc000782160) (0xc000368be0) Stream added, broadcasting: 3\nI0212 13:02:21.202698    3919 log.go:172] (0xc000782160) Reply frame received for 3\nI0212 13:02:21.202722    3919 log.go:172] (0xc000782160) (0xc0005b6140) Create stream\nI0212 13:02:21.202730    3919 log.go:172] (0xc000782160) (0xc0005b6140) Stream added, broadcasting: 5\nI0212 13:02:21.203748    3919 log.go:172] (0xc000782160) Reply frame received for 5\nI0212 13:02:21.333877    3919 log.go:172] (0xc000782160) Data frame received for 3\nI0212 13:02:21.334127    3919 log.go:172] (0xc000368be0) (3) Data frame handling\nI0212 13:02:21.334166    3919 log.go:172] (0xc000368be0) (3) Data frame sent\nI0212 13:02:21.462834    3919 log.go:172] (0xc000782160) (0xc000368be0) Stream removed, broadcasting: 3\nI0212 13:02:21.463146    3919 log.go:172] (0xc000782160) Data frame received for 1\nI0212 13:02:21.463189    3919 log.go:172] (0xc0005b6000) (1) Data frame handling\nI0212 13:02:21.463243    3919 log.go:172] (0xc000782160) (0xc0005b6140) Stream removed, broadcasting: 5\nI0212 13:02:21.463529    3919 log.go:172] (0xc0005b6000) (1) Data frame sent\nI0212 13:02:21.463827    3919 log.go:172] (0xc000782160) (0xc0005b6000) Stream removed, broadcasting: 1\nI0212 13:02:21.463930    3919 log.go:172] (0xc000782160) Go away received\nI0212 13:02:21.464884    3919 log.go:172] (0xc000782160) (0xc0005b6000) Stream removed, broadcasting: 1\nI0212 13:02:21.464933    3919 log.go:172] (0xc000782160) (0xc000368be0) Stream removed, broadcasting: 3\nI0212 13:02:21.464958    3919 log.go:172] (0xc000782160) (0xc0005b6140) Stream removed, broadcasting: 5\n"
Feb 12 13:02:21.480: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:02:21.480: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:02:21.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 12 13:02:22.166: INFO: stderr: "I0212 13:02:21.809281    3941 log.go:172] (0xc000710370) (0xc000730640) Create stream\nI0212 13:02:21.809595    3941 log.go:172] (0xc000710370) (0xc000730640) Stream added, broadcasting: 1\nI0212 13:02:21.815584    3941 log.go:172] (0xc000710370) Reply frame received for 1\nI0212 13:02:21.815643    3941 log.go:172] (0xc000710370) (0xc00065ce60) Create stream\nI0212 13:02:21.815649    3941 log.go:172] (0xc000710370) (0xc00065ce60) Stream added, broadcasting: 3\nI0212 13:02:21.818696    3941 log.go:172] (0xc000710370) Reply frame received for 3\nI0212 13:02:21.818727    3941 log.go:172] (0xc000710370) (0xc000326000) Create stream\nI0212 13:02:21.818737    3941 log.go:172] (0xc000710370) (0xc000326000) Stream added, broadcasting: 5\nI0212 13:02:21.823878    3941 log.go:172] (0xc000710370) Reply frame received for 5\nI0212 13:02:22.024991    3941 log.go:172] (0xc000710370) Data frame received for 3\nI0212 13:02:22.025125    3941 log.go:172] (0xc00065ce60) (3) Data frame handling\nI0212 13:02:22.025152    3941 log.go:172] (0xc00065ce60) (3) Data frame sent\nI0212 13:02:22.148311    3941 log.go:172] (0xc000710370) Data frame received for 1\nI0212 13:02:22.148418    3941 log.go:172] (0xc000730640) (1) Data frame handling\nI0212 13:02:22.148437    3941 log.go:172] (0xc000730640) (1) Data frame sent\nI0212 13:02:22.148465    3941 log.go:172] (0xc000710370) (0xc000730640) Stream removed, broadcasting: 1\nI0212 13:02:22.149423    3941 log.go:172] (0xc000710370) (0xc00065ce60) Stream removed, broadcasting: 3\nI0212 13:02:22.151645    3941 log.go:172] (0xc000710370) (0xc000326000) Stream removed, broadcasting: 5\nI0212 13:02:22.151734    3941 log.go:172] (0xc000710370) (0xc000730640) Stream removed, broadcasting: 1\nI0212 13:02:22.151744    3941 log.go:172] (0xc000710370) (0xc00065ce60) Stream removed, broadcasting: 3\nI0212 13:02:22.151747    3941 log.go:172] (0xc000710370) (0xc000326000) Stream removed, broadcasting: 5\n"
Feb 12 13:02:22.166: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 12 13:02:22.166: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 12 13:02:22.166: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 13:02:22.191: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 12 13:02:32.250: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:02:32.250: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:02:32.250: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 12 13:02:32.304: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:32.304: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:32.304: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:32.304: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:32.304: INFO: 
Feb 12 13:02:32.304: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:33.341: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:33.341: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:33.341: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:33.341: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:33.341: INFO: 
Feb 12 13:02:33.341: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:34.599: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:34.599: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:34.599: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:34.599: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:34.599: INFO: 
Feb 12 13:02:34.599: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:35.623: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:35.623: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:35.623: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:35.623: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:35.623: INFO: 
Feb 12 13:02:35.623: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:36.651: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:36.651: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:36.651: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:36.651: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:36.651: INFO: 
Feb 12 13:02:36.651: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:38.059: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:38.059: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:38.059: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:38.059: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:38.059: INFO: 
Feb 12 13:02:38.059: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:39.080: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:39.080: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:39.080: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:39.081: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:39.081: INFO: 
Feb 12 13:02:39.081: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:40.311: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:40.312: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:32 +0000 UTC  }]
Feb 12 13:02:40.312: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:40.312: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:40.312: INFO: 
Feb 12 13:02:40.312: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 12 13:02:41.318: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 12 13:02:41.318: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:02:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:01:53 +0000 UTC  }]
Feb 12 13:02:41.318: INFO: 
Feb 12 13:02:41.318: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-5krtj
Feb 12 13:02:42.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:02:42.680: INFO: rc: 1
Feb 12 13:02:42.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000eea150 exit status 1   true [0xc0013ce1b8 0xc0013ce1d0 0xc0013ce1e8] [0xc0013ce1b8 0xc0013ce1d0 0xc0013ce1e8] [0xc0013ce1c8 0xc0013ce1e0] [0x935700 0x935700] 0xc001a70ae0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 12 13:02:52.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:02:52.834: INFO: rc: 1
Feb 12 13:02:52.835: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000eea2a0 exit status 1   true [0xc0013ce1f0 0xc0013ce208 0xc0013ce220] [0xc0013ce1f0 0xc0013ce208 0xc0013ce220] [0xc0013ce200 0xc0013ce218] [0x935700 0x935700] 0xc001a70f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:03:02.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:03:02.982: INFO: rc: 1
Feb 12 13:03:02.982: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001c374a0 exit status 1   true [0xc0020600b0 0xc0020600c8 0xc0020600e0] [0xc0020600b0 0xc0020600c8 0xc0020600e0] [0xc0020600c0 0xc0020600d8] [0x935700 0x935700] 0xc001a8b620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:03:12.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:03:13.161: INFO: rc: 1
Feb 12 13:03:13.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001c37620 exit status 1   true [0xc0020600e8 0xc002060100 0xc002060118] [0xc0020600e8 0xc002060100 0xc002060118] [0xc0020600f8 0xc002060110] [0x935700 0x935700] 0xc001a8b9e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:03:23.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:03:23.350: INFO: rc: 1
Feb 12 13:03:23.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001c37770 exit status 1   true [0xc002060120 0xc002060138 0xc002060150] [0xc002060120 0xc002060138 0xc002060150] [0xc002060130 0xc002060148] [0x935700 0x935700] 0xc000d165a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:03:33.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:03:33.538: INFO: rc: 1
Feb 12 13:03:33.538: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00253df80 exit status 1   true [0xc000f2c270 0xc000f2c288 0xc000f2c2a0] [0xc000f2c270 0xc000f2c288 0xc000f2c2a0] [0xc000f2c280 0xc000f2c298] [0x935700 0x935700] 0xc000d12300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:03:43.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:03:43.700: INFO: rc: 1
Feb 12 13:03:43.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec0f0 exit status 1   true [0xc000f2c2a8 0xc000f2c2c0 0xc000f2c2d8] [0xc000f2c2a8 0xc000f2c2c0 0xc000f2c2d8] [0xc000f2c2b8 0xc000f2c2d0] [0x935700 0x935700] 0xc000d12960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:03:53.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:03:53.912: INFO: rc: 1
Feb 12 13:03:53.913: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec210 exit status 1   true [0xc000f2c2e0 0xc000f2c2f8 0xc000f2c320] [0xc000f2c2e0 0xc000f2c2f8 0xc000f2c320] [0xc000f2c2f0 0xc000f2c308] [0x935700 0x935700] 0xc000d12d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:04:03.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:04:04.145: INFO: rc: 1
Feb 12 13:04:04.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000eea450 exit status 1   true [0xc0013ce228 0xc0013ce240 0xc0013ce258] [0xc0013ce228 0xc0013ce240 0xc0013ce258] [0xc0013ce238 0xc0013ce250] [0x935700 0x935700] 0xc001a712c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:04:14.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:04:14.282: INFO: rc: 1
Feb 12 13:04:14.283: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b6150 exit status 1   true [0xc000f2c000 0xc000f2c018 0xc000f2c030] [0xc000f2c000 0xc000f2c018 0xc000f2c030] [0xc000f2c010 0xc000f2c028] [0x935700 0x935700] 0xc001188780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:04:24.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:04:24.409: INFO: rc: 1
Feb 12 13:04:24.409: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00253c180 exit status 1   true [0xc002060000 0xc002060018 0xc002060030] [0xc002060000 0xc002060018 0xc002060030] [0xc002060010 0xc002060028] [0x935700 0x935700] 0xc001a8a960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:04:34.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:04:34.622: INFO: rc: 1
Feb 12 13:04:34.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0025742a0 exit status 1   true [0xc0013ce000 0xc0013ce018 0xc0013ce030] [0xc0013ce000 0xc0013ce018 0xc0013ce030] [0xc0013ce010 0xc0013ce028] [0x935700 0x935700] 0xc001e6c7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:04:44.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:04:44.724: INFO: rc: 1
Feb 12 13:04:44.725: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0025744b0 exit status 1   true [0xc0013ce038 0xc0013ce050 0xc0013ce068] [0xc0013ce038 0xc0013ce050 0xc0013ce068] [0xc0013ce048 0xc0013ce060] [0x935700 0x935700] 0xc001e6cae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:04:54.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:04:54.920: INFO: rc: 1
Feb 12 13:04:54.920: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b63c0 exit status 1   true [0xc000f2c038 0xc000f2c050 0xc000f2c068] [0xc000f2c038 0xc000f2c050 0xc000f2c068] [0xc000f2c048 0xc000f2c060] [0x935700 0x935700] 0xc001188a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:05:04.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:05:05.063: INFO: rc: 1
Feb 12 13:05:05.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00253c420 exit status 1   true [0xc002060038 0xc002060050 0xc002060068] [0xc002060038 0xc002060050 0xc002060068] [0xc002060048 0xc002060060] [0x935700 0x935700] 0xc001a8b080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:05:15.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:05:15.242: INFO: rc: 1
Feb 12 13:05:15.242: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b6510 exit status 1   true [0xc000f2c070 0xc000f2c0a8 0xc000f2c0f0] [0xc000f2c070 0xc000f2c0a8 0xc000f2c0f0] [0xc000f2c088 0xc000f2c0d8] [0x935700 0x935700] 0xc001188d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:05:25.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:05:25.430: INFO: rc: 1
Feb 12 13:05:25.430: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0025745d0 exit status 1   true [0xc0013ce070 0xc0013ce088 0xc0013ce0a8] [0xc0013ce070 0xc0013ce088 0xc0013ce0a8] [0xc0013ce080 0xc0013ce098] [0x935700 0x935700] 0xc001e6cf00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:05:35.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:05:35.608: INFO: rc: 1
Feb 12 13:05:35.608: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0025746f0 exit status 1   true [0xc0013ce0b0 0xc0013ce0c8 0xc0013ce0e0] [0xc0013ce0b0 0xc0013ce0c8 0xc0013ce0e0] [0xc0013ce0c0 0xc0013ce0d8] [0x935700 0x935700] 0xc001e6d260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:05:45.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:05:45.757: INFO: rc: 1
Feb 12 13:05:45.757: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b6630 exit status 1   true [0xc000f2c108 0xc000f2c120 0xc000f2c138] [0xc000f2c108 0xc000f2c120 0xc000f2c138] [0xc000f2c118 0xc000f2c130] [0x935700 0x935700] 0xc001188fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:05:55.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:05:55.892: INFO: rc: 1
Feb 12 13:05:55.892: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec150 exit status 1   true [0xc000c50000 0xc000c50018 0xc000c50058] [0xc000c50000 0xc000c50018 0xc000c50058] [0xc000c50010 0xc000c50028] [0x935700 0x935700] 0xc001ec1800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:06:05.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:06:06.048: INFO: rc: 1
Feb 12 13:06:06.048: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec2a0 exit status 1   true [0xc000c50080 0xc000c500f8 0xc000c50120] [0xc000c50080 0xc000c500f8 0xc000c50120] [0xc000c500d0 0xc000c50118] [0x935700 0x935700] 0xc001d2f8c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:06:16.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:06:16.208: INFO: rc: 1
Feb 12 13:06:16.208: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b6180 exit status 1   true [0xc000f2c000 0xc000f2c018 0xc000f2c030] [0xc000f2c000 0xc000f2c018 0xc000f2c030] [0xc000f2c010 0xc000f2c028] [0x935700 0x935700] 0xc001d2f8c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:06:26.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:06:26.478: INFO: rc: 1
Feb 12 13:06:26.478: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002574270 exit status 1   true [0xc000c50000 0xc000c50018 0xc000c50058] [0xc000c50000 0xc000c50018 0xc000c50058] [0xc000c50010 0xc000c50028] [0x935700 0x935700] 0xc001ec1800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:06:36.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:06:36.623: INFO: rc: 1
Feb 12 13:06:36.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b63f0 exit status 1   true [0xc000f2c038 0xc000f2c050 0xc000f2c068] [0xc000f2c038 0xc000f2c050 0xc000f2c068] [0xc000f2c048 0xc000f2c060] [0x935700 0x935700] 0xc001188780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:06:46.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:06:46.824: INFO: rc: 1
Feb 12 13:06:46.824: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec120 exit status 1   true [0xc0013ce000 0xc0013ce018 0xc0013ce030] [0xc0013ce000 0xc0013ce018 0xc0013ce030] [0xc0013ce010 0xc0013ce028] [0x935700 0x935700] 0xc001e6c7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:06:56.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:06:57.024: INFO: rc: 1
Feb 12 13:06:57.024: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b6540 exit status 1   true [0xc000f2c070 0xc000f2c0a8 0xc000f2c0f0] [0xc000f2c070 0xc000f2c0a8 0xc000f2c0f0] [0xc000f2c088 0xc000f2c0d8] [0x935700 0x935700] 0xc001188a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:07:07.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:07:07.246: INFO: rc: 1
Feb 12 13:07:07.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec2d0 exit status 1   true [0xc0013ce038 0xc0013ce050 0xc0013ce068] [0xc0013ce038 0xc0013ce050 0xc0013ce068] [0xc0013ce048 0xc0013ce060] [0x935700 0x935700] 0xc001e6cae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:07:17.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:07:17.451: INFO: rc: 1
Feb 12 13:07:17.451: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00253c1e0 exit status 1   true [0xc002060000 0xc002060018 0xc002060030] [0xc002060000 0xc002060018 0xc002060030] [0xc002060010 0xc002060028] [0x935700 0x935700] 0xc001a8a960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:07:27.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:07:27.637: INFO: rc: 1
Feb 12 13:07:27.637: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0016ec480 exit status 1   true [0xc0013ce070 0xc0013ce088 0xc0013ce0a8] [0xc0013ce070 0xc0013ce088 0xc0013ce0a8] [0xc0013ce080 0xc0013ce098] [0x935700 0x935700] 0xc001e6cf00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:07:37.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:07:37.806: INFO: rc: 1
Feb 12 13:07:37.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0018b6690 exit status 1   true [0xc000f2c108 0xc000f2c120 0xc000f2c138] [0xc000f2c108 0xc000f2c120 0xc000f2c138] [0xc000f2c118 0xc000f2c130] [0x935700 0x935700] 0xc001188d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 12 13:07:47.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-5krtj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 12 13:07:47.951: INFO: rc: 1
Feb 12 13:07:47.951: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb 12 13:07:47.951: INFO: Scaling statefulset ss to 0
Feb 12 13:07:47.996: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 12 13:07:48.001: INFO: Deleting all statefulset in ns e2e-tests-statefulset-5krtj
Feb 12 13:07:48.004: INFO: Scaling statefulset ss to 0
Feb 12 13:07:48.014: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 13:07:48.017: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:07:48.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-5krtj" for this suite.
Feb 12 13:07:56.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:07:56.270: INFO: namespace: e2e-tests-statefulset-5krtj, resource: bindings, ignored listing per whitelist
Feb 12 13:07:56.404: INFO: namespace e2e-tests-statefulset-5krtj deletion completed in 8.314498924s

• [SLOW TEST:384.374 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
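Note: the burst-scaling test above drives StatefulSet ss down to 0 replicas while its pods are unhealthy, retrying a kubectl exec against ss-1 until the pod has been deleted. Below is a minimal sketch of the same flow with plain kubectl, using only the namespace and object names from the log; the polling loop and the direction of the mv are assumptions about how one would reproduce it by hand.

  # The suite breaks readiness by moving index.html out of nginx's web root (the RunHostCmd retried
  # in the log is the reverse move, tolerated with "|| true" while the pod is being deleted).
  kubectl --namespace=e2e-tests-statefulset-5krtj exec ss-1 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  # Scale to zero even though some pods are not Ready; burst deletion does not wait for readiness.
  kubectl --namespace=e2e-tests-statefulset-5krtj scale statefulset ss --replicas=0
  # Poll until the controller reports status.replicas updated to 0, mirroring the wait in the log.
  until [ "$(kubectl --namespace=e2e-tests-statefulset-5krtj get statefulset ss -o jsonpath='{.status.replicas}')" = "0" ]; do
    sleep 10
  done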
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:07:56.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 12 13:08:08.929: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-afd41735-4d98-11ea-b4b9-0242ac110005,GenerateName:,Namespace:e2e-tests-events-xlxxm,SelfLink:/api/v1/namespaces/e2e-tests-events-xlxxm/pods/send-events-afd41735-4d98-11ea-b4b9-0242ac110005,UID:afd5fcf3-4d98-11ea-a994-fa163e34d433,ResourceVersion:21427436,Generation:0,CreationTimestamp:2020-02-12 13:07:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 850164336,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wb7f8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb7f8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wb7f8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001274da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001274dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:07:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:08:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:08:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:07:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-12 13:07:57 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-12 13:08:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://4f9b7a138e3ba5fc458935434010c3c7ee71c7b2b3eebb3d29bb5f0d17c58c44}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 12 13:08:10.949: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 12 13:08:12.990: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:08:13.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-xlxxm" for this suite.
Feb 12 13:08:53.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:08:53.294: INFO: namespace: e2e-tests-events-xlxxm, resource: bindings, ignored listing per whitelist
Feb 12 13:08:53.432: INFO: namespace e2e-tests-events-xlxxm deletion completed in 40.320448103s

• [SLOW TEST:57.028 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
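Note: the Events test above creates a pod and then asserts that both a scheduler event and a kubelet event were recorded for it. The same signal can be inspected by hand; the pod name and namespace below come from the log, and the field selector is a standard kubectl feature.

  # List events attached to the test pod; expect a Scheduled event from default-scheduler
  # and Pulled/Created/Started events from the kubelet on hunter-server-hu5at5svl7ps.
  kubectl --namespace=e2e-tests-events-xlxxm get events \
    --field-selector involvedObject.name=send-events-afd41735-4d98-11ea-b4b9-0242ac110005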
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:08:53.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d1a93780-4d98-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 13:08:53.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-cgq9k" to be "success or failure"
Feb 12 13:08:53.822: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.412719ms
Feb 12 13:08:55.914: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15312592s
Feb 12 13:08:58.047: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28652282s
Feb 12 13:09:00.072: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310878759s
Feb 12 13:09:02.084: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322977852s
Feb 12 13:09:04.417: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.656710976s
STEP: Saw pod success
Feb 12 13:09:04.418: INFO: Pod "pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 13:09:04.430: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:09:04.909: INFO: Waiting for pod pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005 to disappear
Feb 12 13:09:04.930: INFO: Pod pod-configmaps-d1ad170b-4d98-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:09:04.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cgq9k" for this suite.
Feb 12 13:09:10.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:09:11.097: INFO: namespace: e2e-tests-configmap-cgq9k, resource: bindings, ignored listing per whitelist
Feb 12 13:09:11.143: INFO: namespace e2e-tests-configmap-cgq9k deletion completed in 6.201178098s

• [SLOW TEST:17.710 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
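Note: the ConfigMap test above mounts a ConfigMap volume with defaultMode set and checks the mounted file's permissions and content from inside the pod. A hand-written equivalent might look like the sketch below; the ConfigMap name and namespace come from the log, while the busybox image, the key/value pair, and the 0400 mode are illustrative assumptions (the suite uses its own test image and data).

  kubectl --namespace=e2e-tests-configmap-cgq9k create configmap configmap-test-volume-d1a93780-4d98-11ea-b4b9-0242ac110005 --from-literal=data-1=value-1
  kubectl --namespace=e2e-tests-configmap-cgq9k apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-defaultmode
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox                  # assumed image
      command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-d1a93780-4d98-11ea-b4b9-0242ac110005
        defaultMode: 0400             # files in the volume are created with mode 0400
  EOF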
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:09:11.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 12 13:09:11.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005" in namespace "e2e-tests-projected-tmdjx" to be "success or failure"
Feb 12 13:09:11.557: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.311783ms
Feb 12 13:09:13.901: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380538943s
Feb 12 13:09:15.914: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393434401s
Feb 12 13:09:18.145: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623721001s
Feb 12 13:09:20.173: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.652129306s
Feb 12 13:09:22.184: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.663208274s
STEP: Saw pod success
Feb 12 13:09:22.184: INFO: Pod "downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 13:09:22.190: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005 container client-container: 
STEP: delete the pod
Feb 12 13:09:23.411: INFO: Waiting for pod downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005 to disappear
Feb 12 13:09:23.438: INFO: Pod downwardapi-volume-dc50a9fc-4d98-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:09:23.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tmdjx" for this suite.
Feb 12 13:09:29.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:09:29.837: INFO: namespace: e2e-tests-projected-tmdjx, resource: bindings, ignored listing per whitelist
Feb 12 13:09:29.870: INFO: namespace e2e-tests-projected-tmdjx deletion completed in 6.267932697s

• [SLOW TEST:18.727 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
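Note: the projected downward API test above exposes the container's CPU limit as a file in a projected volume and reads it back. A minimal manifest sketch follows; the namespace comes from the log, while the image, the 500m limit, and the divisor are assumptions chosen so the mounted file would read 500.

  kubectl --namespace=e2e-tests-projected-tmdjx apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-cpu-limit
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                      # assumed; the suite uses its own mounttest image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m                       # with divisor 1m below, the file contains "500"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m
  EOF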
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:09:29.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 13:10:04.245: INFO: Container started at 2020-02-12 13:09:40 +0000 UTC, pod became ready at 2020-02-12 13:10:03 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:10:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4v7jv" for this suite.
Feb 12 13:10:30.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:10:30.634: INFO: namespace: e2e-tests-container-probe-4v7jv, resource: bindings, ignored listing per whitelist
Feb 12 13:10:30.682: INFO: namespace e2e-tests-container-probe-4v7jv deletion completed in 26.397665712s

• [SLOW TEST:60.811 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
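Note: the probe test above verifies that a pod with a readiness probe does not report Ready before its initial delay and is never restarted; the log shows the container starting at 13:09:40 and the pod becoming Ready at 13:10:03. A comparable manifest, assuming an nginx image and an HTTP probe (the suite uses its own test webserver and delay values):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-initial-delay
  spec:
    containers:
    - name: web
      image: nginx:1.14-alpine          # assumed image
      ports:
      - containerPort: 80
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 20         # the pod must not be Ready before ~20s after start
        periodSeconds: 5
  EOF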
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:10:30.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 12 13:10:30.948: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:10:52.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-k966t" for this suite.
Feb 12 13:11:01.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:11:01.158: INFO: namespace: e2e-tests-init-container-k966t, resource: bindings, ignored listing per whitelist
Feb 12 13:11:01.247: INFO: namespace e2e-tests-init-container-k966t deletion completed in 8.63709224s

• [SLOW TEST:30.564 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
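Note: the init-container test above creates a RestartNever pod whose init container exits non-zero and asserts that the app container never starts and the pod ends up Failed. A hedged sketch of such a pod (images, names and commands are assumptions, not the suite's actual spec):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fails-restart-never
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-fail
      image: busybox                    # assumed image
      command: ["sh", "-c", "exit 1"]   # init container fails immediately
    containers:
    - name: app
      image: busybox                    # never started because the init container failed
      command: ["sh", "-c", "echo should-not-run && sleep 3600"]
  EOF
  # Expected end state: Phase=Failed and no running app container.
  kubectl get pod init-fails-restart-never -o jsonpath='{.status.phase}'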
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:11:01.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 13:11:01.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 12 13:11:01.627: INFO: stderr: ""
Feb 12 13:11:01.627: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:11:01.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-67pjh" for this suite.
Feb 12 13:11:07.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:11:07.864: INFO: namespace: e2e-tests-kubectl-67pjh, resource: bindings, ignored listing per whitelist
Feb 12 13:11:08.229: INFO: namespace e2e-tests-kubectl-67pjh deletion completed in 6.583394615s

• [SLOW TEST:6.982 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
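Note: the kubectl version test above only checks that both the Client Version and Server Version blocks are printed, as captured verbatim in the stdout line of the log. The same check by hand:

  # Prints both version.Info structs (client v1.13.12, server v1.13.8 in this run).
  kubectl --kubeconfig=/root/.kube/config version
  # Client-only variant, useful when the API server is unreachable.
  kubectl version --client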
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:11:08.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 12 13:11:08.424: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix275641093/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:11:08.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r5g9x" for this suite.
Feb 12 13:11:14.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:11:14.927: INFO: namespace: e2e-tests-kubectl-r5g9x, resource: bindings, ignored listing per whitelist
Feb 12 13:11:14.953: INFO: namespace e2e-tests-kubectl-r5g9x deletion completed in 6.358808032s

• [SLOW TEST:6.724 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
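Note: the proxy test above starts kubectl proxy on a Unix socket and retrieves /api/ through it. A comparable manual run; the socket path is illustrative, and curl's --unix-socket flag is standard curl, not part of the suite.

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  # Talk to the API through the socket; the host part of the URL is ignored.
  curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  kill %1   # stop the background proxy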
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:11:14.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qz5fv
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 13:11:15.187: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 12 13:11:53.483: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-qz5fv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 13:11:53.483: INFO: >>> kubeConfig: /root/.kube/config
I0212 13:11:53.557125       8 log.go:172] (0xc00183a2c0) (0xc0024ef4a0) Create stream
I0212 13:11:53.557229       8 log.go:172] (0xc00183a2c0) (0xc0024ef4a0) Stream added, broadcasting: 1
I0212 13:11:53.561504       8 log.go:172] (0xc00183a2c0) Reply frame received for 1
I0212 13:11:53.561530       8 log.go:172] (0xc00183a2c0) (0xc0024da5a0) Create stream
I0212 13:11:53.561542       8 log.go:172] (0xc00183a2c0) (0xc0024da5a0) Stream added, broadcasting: 3
I0212 13:11:53.562355       8 log.go:172] (0xc00183a2c0) Reply frame received for 3
I0212 13:11:53.562371       8 log.go:172] (0xc00183a2c0) (0xc0027bf4a0) Create stream
I0212 13:11:53.562378       8 log.go:172] (0xc00183a2c0) (0xc0027bf4a0) Stream added, broadcasting: 5
I0212 13:11:53.563574       8 log.go:172] (0xc00183a2c0) Reply frame received for 5
I0212 13:11:53.871182       8 log.go:172] (0xc00183a2c0) Data frame received for 3
I0212 13:11:53.871259       8 log.go:172] (0xc0024da5a0) (3) Data frame handling
I0212 13:11:53.871311       8 log.go:172] (0xc0024da5a0) (3) Data frame sent
I0212 13:11:54.110733       8 log.go:172] (0xc00183a2c0) Data frame received for 1
I0212 13:11:54.110966       8 log.go:172] (0xc0024ef4a0) (1) Data frame handling
I0212 13:11:54.111029       8 log.go:172] (0xc0024ef4a0) (1) Data frame sent
I0212 13:11:54.111572       8 log.go:172] (0xc00183a2c0) (0xc0024ef4a0) Stream removed, broadcasting: 1
I0212 13:11:54.112276       8 log.go:172] (0xc00183a2c0) (0xc0024da5a0) Stream removed, broadcasting: 3
I0212 13:11:54.112995       8 log.go:172] (0xc00183a2c0) (0xc0027bf4a0) Stream removed, broadcasting: 5
I0212 13:11:54.113056       8 log.go:172] (0xc00183a2c0) (0xc0024ef4a0) Stream removed, broadcasting: 1
I0212 13:11:54.113065       8 log.go:172] (0xc00183a2c0) (0xc0024da5a0) Stream removed, broadcasting: 3
I0212 13:11:54.113077       8 log.go:172] (0xc00183a2c0) (0xc0027bf4a0) Stream removed, broadcasting: 5
Feb 12 13:11:54.113: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:11:54.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-qz5fv" for this suite.
Feb 12 13:12:20.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:12:20.403: INFO: namespace: e2e-tests-pod-network-test-qz5fv, resource: bindings, ignored listing per whitelist
Feb 12 13:12:20.415: INFO: namespace e2e-tests-pod-network-test-qz5fv deletion completed in 26.274099675s

• [SLOW TEST:65.462 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
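Note: in the networking test above, a host-test container curls the test webserver's /dial endpoint, which sends a UDP probe to the other pod and reports which hostname answered (an empty endpoint map means every expected pod responded). The command below is essentially what the log's ExecWithOptions shows; the pod IPs and ports are simply the ones the suite happened to assign in this run.

  # From inside the hostexec pod: ask the netserver at 10.32.0.5:8080 to dial
  # 10.32.0.4:8081 over UDP once and return the responder's hostname.
  kubectl --namespace=e2e-tests-pod-network-test-qz5fv exec host-test-container-pod -c hostexec -- \
    curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'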
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:12:20.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 12 13:12:20.655: INFO: Creating deployment "nginx-deployment"
Feb 12 13:12:20.672: INFO: Waiting for observed generation 1
Feb 12 13:12:23.452: INFO: Waiting for all required pods to come up
Feb 12 13:12:23.478: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 12 13:13:11.955: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 12 13:13:11.970: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 12 13:13:11.991: INFO: Updating deployment nginx-deployment
Feb 12 13:13:11.992: INFO: Waiting for observed generation 2
Feb 12 13:13:15.648: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 12 13:13:16.232: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 12 13:13:17.676: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 12 13:13:18.777: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 12 13:13:18.777: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 12 13:13:20.726: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 12 13:13:23.717: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 12 13:13:23.718: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 12 13:13:24.876: INFO: Updating deployment nginx-deployment
Feb 12 13:13:24.876: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 12 13:13:25.659: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 12 13:13:29.021: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 12 13:13:29.758: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6lhnk/deployments/nginx-deployment,UID:4d123eb7-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428223,Generation:3,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-12 13:13:25 +0000 UTC 2020-02-12 13:13:25 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-12 13:13:27 +0000 UTC 2020-02-12 13:12:20 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 12 13:13:31.498: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6lhnk/replicasets/nginx-deployment-5c98f8fb5,UID:6bab2ba7-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428214,Generation:3,CreationTimestamp:2020-02-12 13:13:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4d123eb7-4d99-11ea-a994-fa163e34d433 0xc001f1f817 0xc001f1f818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 12 13:13:31.499: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 12 13:13:31.499: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6lhnk/replicasets/nginx-deployment-85ddf47c5d,UID:4d1dc94c-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428216,Generation:3,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4d123eb7-4d99-11ea-a994-fa163e34d433 0xc001f1f8d7 0xc001f1f8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
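[editor's note] The Deployment and ReplicaSet dumps above are internally consistent with the rolling-update bounds: the deployment.kubernetes.io/max-replicas: 30 + maxSurge 3 gives 33, the two ReplicaSets together account for Status.Replicas 33 (20 + 13), only the old ReplicaSet's 8 ready pods are available, and 8 is below the desired 30 minus maxUnavailable 2 = 28 floor, which is why the Available condition reports MinimumReplicasUnavailable. A small sketch of those checks, using only figures taken from the dumps:

```go
package main

import "fmt"

func main() {
	// Figures copied from the Deployment/ReplicaSet dumps in this log.
	desired, maxSurge, maxUnavailable := 30, 3, 2
	oldRS, newRS := 20, 13 // .spec.replicas of the two ReplicaSets
	available := 8         // ready pods, all from the old ReplicaSet

	maxReplicas := desired + maxSurge        // 33, matches the max-replicas annotation
	totalCreated := oldRS + newRS            // 33, matches Status.Replicas
	unavailable := totalCreated - available  // 25, matches Status.UnavailableReplicas
	minAvailable := desired - maxUnavailable // 28, the availability floor

	fmt.Println(maxReplicas, totalCreated, unavailable)
	fmt.Println("has minimum availability:", available >= minAvailable) // false -> MinimumReplicasUnavailable
}
```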
Feb 12 13:13:31.871: INFO: Pod "nginx-deployment-5c98f8fb5-5x476" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5x476,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-5x476,UID:6c34b411-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428146,Generation:0,CreationTimestamp:2020-02-12 13:13:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb85f7 0xc001eb85f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb8750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb8800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.872: INFO: Pod "nginx-deployment-5c98f8fb5-9kkvq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9kkvq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-9kkvq,UID:6bb28633-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428114,Generation:0,CreationTimestamp:2020-02-12 13:13:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb8987 0xc001eb8988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb89f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb8a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.873: INFO: Pod "nginx-deployment-5c98f8fb5-b9pst" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b9pst,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-b9pst,UID:747a45c9-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428176,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb8bc7 0xc001eb8bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb8c30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb8c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.873: INFO: Pod "nginx-deployment-5c98f8fb5-bc8qq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bc8qq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-bc8qq,UID:7497f534-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428198,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb8d37 0xc001eb8d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb8da0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb8dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.874: INFO: Pod "nginx-deployment-5c98f8fb5-bpmzl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bpmzl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-bpmzl,UID:6bb913ff-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428132,Generation:0,CreationTimestamp:2020-02-12 13:13:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb8e37 0xc001eb8e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb8ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb8ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.874: INFO: Pod "nginx-deployment-5c98f8fb5-c25c9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c25c9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-c25c9,UID:6c28b774-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428140,Generation:0,CreationTimestamp:2020-02-12 13:13:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb9017 0xc001eb9018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb90a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.875: INFO: Pod "nginx-deployment-5c98f8fb5-dp7fs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dp7fs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-dp7fs,UID:74973032-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428200,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb9187 0xc001eb9188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb92a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.875: INFO: Pod "nginx-deployment-5c98f8fb5-gznqg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gznqg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-gznqg,UID:7470a94f-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428219,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb9317 0xc001eb9318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb93a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.875: INFO: Pod "nginx-deployment-5c98f8fb5-jcqm7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jcqm7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-jcqm7,UID:749798a1-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428201,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb9467 0xc001eb9468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb94d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb94f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.876: INFO: Pod "nginx-deployment-5c98f8fb5-ktm6m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ktm6m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-ktm6m,UID:74b654ab-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428210,Generation:0,CreationTimestamp:2020-02-12 13:13:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb9567 0xc001eb9568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb95d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb95f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.876: INFO: Pod "nginx-deployment-5c98f8fb5-mfxb9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mfxb9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-mfxb9,UID:74983132-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428193,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb97a7 0xc001eb97a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb9830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.877: INFO: Pod "nginx-deployment-5c98f8fb5-n6bhw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n6bhw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-n6bhw,UID:6bb887a3-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428138,Generation:0,CreationTimestamp:2020-02-12 13:13:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb98a7 0xc001eb98a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb9930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.877: INFO: Pod "nginx-deployment-5c98f8fb5-wvw6q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wvw6q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-5c98f8fb5-wvw6q,UID:747ac4e2-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428235,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 6bab2ba7-4d99-11ea-a994-fa163e34d433 0xc001eb99f7 0xc001eb99f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb9a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.878: INFO: Pod "nginx-deployment-85ddf47c5d-4462k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4462k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-4462k,UID:4d26addc-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428086,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001eb9b47 0xc001eb9b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb9bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:13:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://909b14763426af33363e85b3ee3f8acc29ee02115651f217218c54a9b6860b5c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.879: INFO: Pod "nginx-deployment-85ddf47c5d-4jwch" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4jwch,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-4jwch,UID:74749b3f-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428178,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001eb9cb7 0xc001eb9cb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb9d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.879: INFO: Pod "nginx-deployment-85ddf47c5d-67zdl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-67zdl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-67zdl,UID:4d2c016f-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428061,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001eb9dd7 0xc001eb9dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb9e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:13:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://08ed63f7fbfcaf1726f37de1a4744e5848d4a48f717e5fc3b1c7caa111f4b627}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.880: INFO: Pod "nginx-deployment-85ddf47c5d-6mzzm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6mzzm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-6mzzm,UID:74986d6a-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428196,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001eb9f27 0xc001eb9f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb9fb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f3c180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.880: INFO: Pod "nginx-deployment-85ddf47c5d-7m6tx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7m6tx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-7m6tx,UID:74ae9cbb-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428204,Generation:0,CreationTimestamp:2020-02-12 13:13:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001f3c1f7 0xc001f3c1f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f3c260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f3c280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.881: INFO: Pod "nginx-deployment-85ddf47c5d-7vbls" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7vbls,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-7vbls,UID:4d290535-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428038,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001f3c337 0xc001f3c338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f3c460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f3c480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:12:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9e7effca00798daadbe65136b7a3e92921e24e3c320d4448ff77ad52bd566d39}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.882: INFO: Pod "nginx-deployment-85ddf47c5d-bfqt4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bfqt4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-bfqt4,UID:4d2c2a8e-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428077,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001f3c777 0xc001f3c778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f3cc90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f3ccb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:13:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2a9d08deb1614ba9b8aa65f65fae13f28e62688a9a153b65f7fc80a0d7905d53}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.883: INFO: Pod "nginx-deployment-85ddf47c5d-gljr8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gljr8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-gljr8,UID:4d28885b-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428042,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001f3ce27 0xc001f3ce28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f3d000} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f3d020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:12:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a0be52d108a45ff7f4e3ee6513c4ce5488adbdbff6396db81211347b468d2276}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.884: INFO: Pod "nginx-deployment-85ddf47c5d-gqvws" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gqvws,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-gqvws,UID:743db8d8-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428202,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001f3d207 0xc001f3d208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016d46f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016d4710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.884: INFO: Pod "nginx-deployment-85ddf47c5d-grkm9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-grkm9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-grkm9,UID:4d484009-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428059,Generation:0,CreationTimestamp:2020-02-12 13:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0016d4c37 0xc0016d4c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016d4cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016d56c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:13:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://34ee34ded5fe5e82858fa52966429c7cb6c9743a97bf447c1b2a75e6bc7096be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.884: INFO: Pod "nginx-deployment-85ddf47c5d-hwwjw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hwwjw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-hwwjw,UID:7496f1ad-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428186,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0016d5b57 0xc0016d5b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016d5bc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016d5c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.885: INFO: Pod "nginx-deployment-85ddf47c5d-j6zfj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j6zfj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-j6zfj,UID:74740ff3-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428231,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0016d5e97 0xc0016d5e98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0016d5f00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0016d5f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-12 13:13:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.885: INFO: Pod "nginx-deployment-85ddf47c5d-kbqqh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kbqqh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-kbqqh,UID:74980b25-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428197,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0016d5fd7 0xc0016d5fd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023ac0d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023ac0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.886: INFO: Pod "nginx-deployment-85ddf47c5d-mkz5v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mkz5v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-mkz5v,UID:4d2c45bf-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428065,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0023ac167 0xc0023ac168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023ac340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023ac360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:13:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0bde44852be5fcc5ac1007c985fd3742b9d1fabff9020ffebe19ff22e1b50694}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.886: INFO: Pod "nginx-deployment-85ddf47c5d-n2v79" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n2v79,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-n2v79,UID:74952052-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428199,Generation:0,CreationTimestamp:2020-02-12 13:13:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0023ac487 0xc0023ac488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023ac590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023ac980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.886: INFO: Pod "nginx-deployment-85ddf47c5d-pj4l5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pj4l5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-pj4l5,UID:4d2c3961-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428068,Generation:0,CreationTimestamp:2020-02-12 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0023aca27 0xc0023aca28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023acaa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023acdc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:12:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-12 13:12:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-12 13:13:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://daaa3c7abe67f1cd9056c5ef1d835223d3cc0eac02d9481bd0ff5f7eeb330bc7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.887: INFO: Pod "nginx-deployment-85ddf47c5d-s2bkl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s2bkl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-s2bkl,UID:74ae8567-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428208,Generation:0,CreationTimestamp:2020-02-12 13:13:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001002097 0xc001002098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001002110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001002130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.887: INFO: Pod "nginx-deployment-85ddf47c5d-swjhr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-swjhr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-swjhr,UID:74ae8a79-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428207,Generation:0,CreationTimestamp:2020-02-12 13:13:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc0010021a7 0xc0010021a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001002280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010022a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.887: INFO: Pod "nginx-deployment-85ddf47c5d-z226x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z226x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-z226x,UID:74ae92c7-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428206,Generation:0,CreationTimestamp:2020-02-12 13:13:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001002317 0xc001002318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001002390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010025b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 12 13:13:31.888: INFO: Pod "nginx-deployment-85ddf47c5d-zzpqg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zzpqg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6lhnk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6lhnk/pods/nginx-deployment-85ddf47c5d-zzpqg,UID:74aea7a3-4d99-11ea-a994-fa163e34d433,ResourceVersion:21428211,Generation:0,CreationTimestamp:2020-02-12 13:13:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 4d1dc94c-4d99-11ea-a994-fa163e34d433 0xc001002a67 0xc001002a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rpmvp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rpmvp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rpmvp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001002ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001002af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:13:27 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:13:31.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6lhnk" for this suite.
Feb 12 13:14:55.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:14:55.576: INFO: namespace: e2e-tests-deployment-6lhnk, resource: bindings, ignored listing per whitelist
Feb 12 13:14:57.568: INFO: namespace e2e-tests-deployment-6lhnk deletion completed in 1m23.848500898s

• [SLOW TEST:157.153 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
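The pod dumps above belong to the proportional-scaling spec: when the Deployment is scaled up while a rollout is in flight, the extra replicas are divided between the coexisting ReplicaSets in proportion to their current sizes, within the RollingUpdate surge/unavailable budget, which is why the 85ddf47c5d pods above are a mix of long-running "available" pods and freshly created "not available" ones. A minimal Go sketch of the kind of Deployment object involved follows; the replica count, the maxSurge/maxUnavailable values, and the object name are illustrative assumptions, not the values the test itself uses.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Illustrative budget only; the e2e test picks its own replica count
	// and surge/unavailable values.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					// These two fields bound how new replicas are spread
					// across the old and new ReplicaSets during scaling.
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("deployment %s: %d replicas, surge %s, unavailable %s\n",
		d.Name, *d.Spec.Replicas, maxSurge.String(), maxUnavailable.String())
}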
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:14:57.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-aadc4d58-4d99-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume configMaps
Feb 12 13:14:58.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005" in namespace "e2e-tests-configmap-xxm7d" to be "success or failure"
Feb 12 13:14:58.756: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 111.578432ms
Feb 12 13:15:02.246: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601885383s
Feb 12 13:15:04.263: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.619014834s
Feb 12 13:15:06.272: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.627617985s
Feb 12 13:15:08.301: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.656800626s
Feb 12 13:15:10.320: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.675379908s
Feb 12 13:15:12.397: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.752260606s
Feb 12 13:15:14.468: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.82355201s
Feb 12 13:15:16.516: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.87138175s
Feb 12 13:15:18.541: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.896905133s
Feb 12 13:15:20.564: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.919488808s
STEP: Saw pod success
Feb 12 13:15:20.564: INFO: Pod "pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 13:15:20.574: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Feb 12 13:15:22.051: INFO: Waiting for pod pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005 to disappear
Feb 12 13:15:22.326: INFO: Pod pod-configmaps-ab193dba-4d99-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:15:22.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xxm7d" for this suite.
Feb 12 13:15:28.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:15:28.749: INFO: namespace: e2e-tests-configmap-xxm7d, resource: bindings, ignored listing per whitelist
Feb 12 13:15:28.904: INFO: namespace e2e-tests-configmap-xxm7d deletion completed in 6.554146167s

• [SLOW TEST:31.336 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
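The "volume with mappings" variant mounts the ConfigMap through a volume whose Items list remaps a key to a custom path inside the mount, and the test container prints that file so the framework can check the content via its logs. A minimal Go sketch of the pod shape, built from the k8s.io/api types, follows; the ConfigMap name, key, path, and the busybox/cat command are illustrative stand-ins for the generated names and the mount-test image the suite actually runs.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The "with mappings" part: project key "data-2" to a
						// custom path instead of a file named after the key.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Println("would create pod", pod.Name)
}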
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:15:28.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-bd5df209-4d99-11ea-b4b9-0242ac110005
STEP: Creating a pod to test consume secrets
Feb 12 13:15:29.276: INFO: Waiting up to 5m0s for pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005" in namespace "e2e-tests-secrets-b9psj" to be "success or failure"
Feb 12 13:15:29.290: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092716ms
Feb 12 13:15:31.630: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354117782s
Feb 12 13:15:33.641: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365432614s
Feb 12 13:15:35.963: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687537483s
Feb 12 13:15:38.011: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735619792s
Feb 12 13:15:40.158: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.882122277s
Feb 12 13:15:42.173: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.897375203s
STEP: Saw pod success
Feb 12 13:15:42.173: INFO: Pod "pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 13:15:42.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Feb 12 13:15:42.421: INFO: Waiting for pod pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005 to disappear
Feb 12 13:15:42.435: INFO: Pod pod-secrets-bd7513c5-4d99-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:15:42.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b9psj" for this suite.
Feb 12 13:15:48.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:15:48.690: INFO: namespace: e2e-tests-secrets-b9psj, resource: bindings, ignored listing per whitelist
Feb 12 13:15:48.774: INFO: namespace e2e-tests-secrets-b9psj deletion completed in 6.324857366s
STEP: Destroying namespace "e2e-tests-secret-namespace-9qbbn" for this suite.
Feb 12 13:15:54.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:15:54.898: INFO: namespace: e2e-tests-secret-namespace-9qbbn, resource: bindings, ignored listing per whitelist
Feb 12 13:15:55.055: INFO: namespace e2e-tests-secret-namespace-9qbbn deletion completed in 6.280387279s

• [SLOW TEST:26.150 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
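This spec relies on the fact that a SecretVolumeSource carries only a name and is resolved in the pod's own namespace, so an identically named secret elsewhere (the second namespace destroyed above) never leaks into the mount. A minimal Go sketch of that pod shape follows; the secret name, namespace, image, and command are illustrative assumptions, not the generated names from the run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretName := "secret-test" // same name may exist in another namespace

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pod-secrets-example",
			Namespace: "e2e-tests-secrets-example", // lookups happen here, nowhere else
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Printf("pod %s/%s mounts secret %q from its own namespace\n",
		pod.Namespace, pod.Name, secretName)
}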
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:15:55.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 12 13:15:55.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jfdm5'
Feb 12 13:15:57.830: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 12 13:15:57.830: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 12 13:15:59.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-jfdm5'
Feb 12 13:16:00.070: INFO: stderr: ""
Feb 12 13:16:00.071: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:16:00.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jfdm5" for this suite.
Feb 12 13:16:06.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:16:06.396: INFO: namespace: e2e-tests-kubectl-jfdm5, resource: bindings, ignored listing per whitelist
Feb 12 13:16:06.496: INFO: namespace e2e-tests-kubectl-jfdm5 deletion completed in 6.408931742s

• [SLOW TEST:11.442 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
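This spec drives the real kubectl binary, so the deprecation warning for --generator=deployment/apps.v1 and the "deployment.apps/... created" line above are kubectl's own stderr/stdout captured by the framework. Reproducing the call outside the framework is just a matter of shelling out, as in the sketch below; the kubeconfig path and namespace are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Shell out to kubectl the same way the framework does and capture
	// both streams; on a 1.13 cluster this form creates a Deployment and
	// prints the deprecation warning seen in the log.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config",
		"run", "e2e-test-nginx-deployment",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--namespace=e2e-tests-kubectl-example")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("kubectl output:\n%s", out)
}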
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:16:06.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d3d49313-4d99-11ea-b4b9-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d3d49313-4d99-11ea-b4b9-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:16:21.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-82r8g" for this suite.
Feb 12 13:16:45.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:16:45.281: INFO: namespace: e2e-tests-projected-82r8g, resource: bindings, ignored listing per whitelist
Feb 12 13:16:45.327: INFO: namespace e2e-tests-projected-82r8g deletion completed in 24.232538503s

• [SLOW TEST:38.830 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
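Here the ConfigMap is consumed through a projected volume rather than a plain configMap volume; the kubelet periodically refreshes projected keys, so once the ConfigMap is updated the new content eventually appears in the mounted file, which is what "waiting to observe update in volume" polls for. A minimal Go sketch of such a pod follows; the names, the busybox image, and the shell loop are illustrative stand-ins for the generated names and the container the suite actually runs.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-upd",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	fmt.Println("would create pod", pod.Name)
}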
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 12 13:16:45.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 12 13:16:45.535: INFO: Waiting up to 5m0s for pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005" in namespace "e2e-tests-emptydir-p2bqr" to be "success or failure"
Feb 12 13:16:45.642: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 107.143496ms
Feb 12 13:16:47.912: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376948429s
Feb 12 13:16:49.955: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419442545s
Feb 12 13:16:52.126: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5909149s
Feb 12 13:16:54.146: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.610519093s
Feb 12 13:16:56.165: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.629688142s
Feb 12 13:16:58.190: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.654737266s
STEP: Saw pod success
Feb 12 13:16:58.190: INFO: Pod "pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005" satisfied condition "success or failure"
Feb 12 13:16:58.205: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005 container test-container: 
STEP: delete the pod
Feb 12 13:16:58.474: INFO: Waiting for pod pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005 to disappear
Feb 12 13:16:58.504: INFO: Pod pod-eaefb1bf-4d99-11ea-b4b9-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 12 13:16:58.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p2bqr" for this suite.
Feb 12 13:17:04.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 13:17:04.982: INFO: namespace: e2e-tests-emptydir-p2bqr, resource: bindings, ignored listing per whitelist
Feb 12 13:17:04.997: INFO: namespace e2e-tests-emptydir-p2bqr deletion completed in 6.365644077s

• [SLOW TEST:19.670 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
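The (non-root,0666,tmpfs) case mounts an emptyDir backed by memory (tmpfs), writes to it as a non-root user, and expects the created file to carry 0666 permissions, which the framework verifies from the container's output. A minimal Go sketch of that pod shape follows; the UID, the busybox image, and the shell command are illustrative stand-ins for the mount-test image and flags the suite actually uses.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium Memory == tmpfs-backed emptyDir.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1001), // run as a non-root user
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	fmt.Println("would create pod", pod.Name)
}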
SSSSSSSS
Feb 12 13:17:04.998: INFO: Running AfterSuite actions on all nodes
Feb 12 13:17:04.998: INFO: Running AfterSuite actions on node 1
Feb 12 13:17:04.998: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8990.629 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS